
Meta launches new AI tool to take on ChatGPT

Fahim ul Haq
Sep 25, 2023
8 min read

AI remains at the forefront of tech news with the ever-expanding roster of AI coding assistants. If you haven't heard, Meta recently launched its own assistant called Code Llama. They also created special variants specifically optimized for natural language prompts and for Python. This is particularly exciting news for those interested in learning how to code — it's never been easier to get started on your own projects.

Today I want to take a look at Code Llama, compare it to other dev-focused AI tools on the market, and share some tips on how to get the most out of them. Though built with a similar purpose, each of these models differs in pricing, out-of-the-box functionality, and public access. I've also included a cheat sheet for breaking down some new and popular AI assistant models at a glance.

Who knows — maybe you’ll find yourself a new coding assistant!

If you’re interested in learning how to use AI tools more effectively, or would like a deeper understanding of the intricacies behind code assistants like Code Llama, check out our Prompt Engineering Course. By the end, you'll have a foundational knowledge of generative AI and prompt engineering, and you'll be able to craft prompts that elicit accurate and nuanced responses from a range of tools.

All You Need to Know About Prompt Engineering

Prompt engineering means designing high-quality prompts that guide machine learning models to produce accurate outputs. It involves selecting the correct type of prompts, optimizing their length and structure, and determining their order and relevance to the task. In this course, you'll be introduced to prompt engineering, a key skill for working with generative AI. You'll look at an overview of prompts and their types, best practices, and role prompting (a short example follows the course card below). Additionally, you'll gain a detailed understanding of different prompting techniques. The course also explores productivity prompts for different roles. Finally, you'll learn to use prompts for personal tasks, such as preparing for interviews. By the end of the course, you'll have developed a solid understanding of prompt engineering principles and techniques, and you'll be equipped to apply them in your own field. This will help you stay ahead of the curve and take advantage of new opportunities as they arise.

7hrs
Beginner
2 Quizzes
128 Illustrations
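To make one of those techniques concrete, here's a small sketch of role prompting: the same question is asked with and without an assigned persona, which typically steers the tone and depth of the answer. The chat-message structure shown follows a common convention for chat-style models; it is not the exact format of any particular tool.

```python
# Role prompting: assign the model a persona before asking the question.
# The message format below follows a common chat-API convention; exact fields vary by tool.

plain_prompt = [
    {"role": "user", "content": "Explain Big-O notation."},
]

role_prompt = [
    {
        "role": "system",
        "content": (
            "You are a patient computer science tutor who explains "
            "concepts with short, concrete examples."
        ),
    },
    {"role": "user", "content": "Explain Big-O notation."},
]

# The role_prompt version usually yields a gentler, example-driven answer.
print(plain_prompt)
print(role_prompt)
```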

(Note: Educative offers its own AI-assisted learning feature! It's currently available in our Grokking Coding Interview Patterns courses. If you haven't had a chance to check it out yet, give it a shot and let me know what you think.)

Without further ado, let's get the rundown on Code Llama.

Code Llama#

  • Developed by Meta

  • Free

  • 3 models: general, Python, and natural language

  • Available for research and commercial use

  • Open-source

This large language model (LLM) accepts both natural language and code in prompts and can return a mix of both as output. Built on top of Llama 2, Meta's general-purpose model, Code Llama consists of three different offerings: Code Llama (the foundational code model), Code Llama - Python (specialized for Python programming), and Code Llama - Instruct (fine-tuned for understanding natural language instructions). Each model is released in three sizes, measured in parameters, the learned values that determine how a model processes data and generates predictions: 7B, 13B, and 34B. As a general rule of thumb, more parameters mean a more sophisticated model.
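Because the model weights are openly available, you can experiment with Code Llama on your own hardware. Below is a minimal sketch of code completion with the 7B base model; it assumes the Hugging Face transformers, accelerate, and torch packages are installed, that the codellama/CodeLlama-7b-hf checkpoint can be downloaded, and that you have a GPU with enough memory (otherwise it will fall back to CPU and run slowly).

```python
# Minimal sketch: code completion with the Code Llama 7B base model.
# Assumes transformers, accelerate, and torch are installed and the
# codellama/CodeLlama-7b-hf weights can be downloaded from Hugging Face.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce memory use
    device_map="auto",          # place layers on GPU/CPU automatically
)

# Give the model the start of a function and let it complete the body.
prompt = 'def fibonacci(n):\n    """Return the n-th Fibonacci number."""\n'
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```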

Meta’s research paper sheds light on Code Llama's development, limitations, and guidelines for responsible use. The authors note that Code Llama outperformed other open-source, code-specific LLMs: “On multilingual benchmarks, even our smallest model (Code Llama 7B) outperforms every other public model.” Code Llama's models also support long context windows, allowing for more relevant code generation; this is most apparent when debugging large codebases.

As of this writing, Code Llama is free for research and commercial use, and its code and model weights are openly available. Meta states: “We follow the approach of learning from publicly available code only, without additional meta-level or temporal information such as issues or commits.”

Top AI Code Assistants

GPT-4#

  • Developed by OpenAI

  • $20 per month

  • Est. 1.7T parameters (unconfirmed)

  • Available to the public

  • Closed-source

ChatGPT, developed by OpenAI, remains the gold standard for AI chatbots, producing helpful results more accurately and consistently than any of its competitors. When it comes to coding, though, the best choice depends more on use cases, community adoption, and individual preference. Similar to Code Llama, the newer GPT-4 model understands both code and natural language, making the platform more accessible for those learning to code. Access requires a ChatGPT Plus subscription, which costs $20 per month. GPT-4 is not open-source; users do not have access to its code, model architecture, or training data, which makes it especially challenging to reliably reproduce results.
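Beyond the ChatGPT web interface, GPT-4 is also reachable programmatically through OpenAI's API (billed separately from the ChatGPT Plus subscription). Here's a minimal sketch, assuming the openai Python package (v1.x) is installed and an OPENAI_API_KEY environment variable is set:

```python
# Minimal sketch: asking GPT-4 to explain a line of code via the OpenAI API.
# Assumes the openai package (v1.x) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {
            "role": "user",
            "content": "Explain what this does: [x * x for x in range(10) if x % 2 == 0]",
        },
    ],
)
print(response.choices[0].message.content)
```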

GitHub Copilot#

  • Developed by GitHub

  • $10 per month

  • 12B parameters

  • Closed-source

At half the monthly price of GPT-4, GitHub Copilot is an AI pair programmer that offers suggestions on the go. A GitHub personal account is required to gain access. Advertised as an assistant that speeds up writing code, GitHub Copilot makes autocomplete suggestions as you write in your editor. It's designed to improve developer productivity, reduce coding errors, and help with code exploration and learning. GitHub Copilot does come with potential drawbacks around reliability, code quality, privacy, and licensing. Its effectiveness may also depend on the specific use case and the developer's skill level, and some users find that it offers too many suggestions at once, which can be distracting.
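To give a feel for that workflow, here's a hypothetical comment-driven completion: the developer writes the docstring and function signature, and the assistant proposes a body along the lines of the code below. The suggested body is illustrative only, not actual Copilot output.

```python
from collections import Counter

# Developer writes the signature and docstring; the assistant suggests the body.
def most_common_words(text: str, n: int = 10) -> list[tuple[str, int]]:
    """Return the n most common words in text, ignoring case."""
    # --- suggested completion (illustrative) ---
    words = text.lower().split()
    return Counter(words).most_common(n)

print(most_common_words("the cat sat on the mat and the dog sat too", 2))
```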

GitHub's announced enhancement to Copilot, Copilot X, has yet to see a full release. The upgrade is planned to deliver context-aware conversation and more personalized answers. There's currently a waitlist for full access, but a technical preview is available on GitHub's website.

Amazon CodeWhisperer#

  • Developed by Amazon

  • Free

  • Size unknown

  • Available for public use

  • Closed-source

Amazon's AI assistant, CodeWhisperer, functions much like GitHub Copilot, generating personalized code suggestions in real time. What sets it apart is its strong focus on security: it excels at identifying and resolving potential security issues in your code, making it a valuable tool for writing secure applications. It's worth noting that, like GPT-4, CodeWhisperer is available for public use but is not open-source, so developers can leverage its capabilities while Amazon keeps its core functionality proprietary.
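To illustrate the kind of issue a security scan is designed to catch, consider the snippet below: a hardcoded credential, a classic finding for tools like this. The example and its comments are hypothetical, not actual CodeWhisperer output, and the boto3 package is assumed to be installed.

```python
import boto3

# Hypothetical security finding: hardcoded credentials can leak via version control.
s3_risky = boto3.client(
    "s3",
    aws_access_key_id="AKIAEXAMPLEEXAMPLE",         # would likely be flagged
    aws_secret_access_key="exampleSecretKey12345",  # would likely be flagged
)

# Safer alternative: rely on the default credential chain
# (environment variables, shared config files, or an IAM role).
s3_safe = boto3.client("s3")
```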

Duet AI#

  • Developed by Google

  • $30 per month

  • Size unknown 

  • Available for pre-order for enterprise customers 

Duet AI represents Google's venture into the realm of LLMs. While this AI assistant is not yet publicly available, enterprise customers can pre-order it at a competitive rate of $30 per month. Currently in its preview phase, the Google AI assistant shows potential but comes with its fair share of limitations. Notably, it currently accepts input in English only, which may restrict its accessibility for some users. Like several other LLMs, however, Duet AI handles natural language inputs well and comes equipped with a versatile chat assistant, underscoring its promise for a range of language-related tasks.

Salesforce CodeGen#

  • Developed by Salesforce

  • Free

  • 7B parameters

  • Not publicly available

  • Closed-source 

Developed by Salesforce, CodeGen stands out as an intuitive and conversational LLM. What sets it apart is its compact size: at 7B parameters, CodeGen is the smallest LLM on this list. Developers who received early access have lauded its remarkable performance despite its smaller footprint, emphasizing the "small but mighty" nature of this AI tool. While free access is promised eventually, CodeGen is not yet available to the public, leaving developers anticipating its release and the potential it holds.

StarCoder#

  • Developed by ServiceNow and Hugging Face

  • Available for lifetime purchase in Lite ($50), Plus ($80), or Pro ($125)

  • 15.5B parameters

  • Available to the public

  • Open-source

Introduced in 2023, StarCoder was developed by ServiceNow and Hugging Face as a large language model built specifically for code. Three versions of StarCoder exist: Lite, Plus, and Pro, each tailored to different user needs. Users purchase a lifetime plan for $50, $80, or $125, respectively, choosing the feature set that best fits their requirements. Like many new language models, StarCoder depends heavily on its training data and should continue to improve as it sees more usage, making it a tool worth watching in the evolving landscape of AI-driven coding assistance.

It should be noted that there are many other AI assistants that I wasn’t able to break down here, with new models continuously being introduced. I will be curious to see what new tools are developed in the near future, as well as how currently popular models like ChatGPT continue to evolve.

Remember: AI assistants enhance (not replace) programmer productivity#

Of course, we're continuing to hear the same anxieties and questions about the future of software development: will tools like Code Llama take developer jobs?

The answer remains a resounding no, and here’s why.

When discussing these AI assistants, it's important to emphasize the word "assistant." As powerful as they are, they remain, at their core, an amped-up version of autocomplete.

As programmers, to solve a problem we must decide which tools to leverage, and then translate the solution into a programming language that a compiler or interpreter can execute.

These new tools can’t solve the problem for us. But where they do excel is at reducing tedium.

These tools now handle the grunt work of producing boilerplate, extending it from single lines to full functions and relatively large programs. Where we once had to go line by line, they can help us quickly scale to tens or even hundreds of lines.

While it is remarkable to see computer programs generate fully functional code, these tools are ultimately time savers, and they're great for writing simple, straightforward code. You still have to manually tweak the output and optimize its performance. Those small nuances won't go away anytime soon, but producing that draft version is becoming more and more convenient.

It's essential to view these tools as collaborators, working alongside you to amplify your capabilities rather than as potential job replacements.

As AI assistants proliferate and reshape the software development landscape, the best way to ease anxiety about their impact on software careers is to embrace these tools as allies in a constantly evolving field.

Even if you don't plan on becoming an AI engineer, having prompt engineering in your toolbelt is essential regardless of your role. With our Prompt Engineering Course at Educative, you'll learn not only how to get the best performance out of AI tools, but also how to make sure they provide insights and solutions tailored to your needs.

Happy learning! 


  
