<aside> 💡
This page presents comparative performance analyses of several Large Language Models (LLMs), focused on their code-related capabilities: understanding, generating, debugging, and documenting code across different programming languages and frameworks.
</aside>
For the first round of this test, I’ll use `useCookies`, a composable from one of my personal Nuxt 3 projects. It’s a TypeScript composable that fetches data from an endpoint that serves browser cookies in JSON format. The code works, but it is far from scalable or maintainable:
```typescript
import { useCookiesData, normalizeString } from "#imports";

export const useCookies = (): Cookies => {
  const get = async (id: string): Promise<Record<string, any>> => {
    try {
      id = normalizeString(id);
      return await useCookiesData(`/${id}`, {
        headers: {
          "Content-Type": "application/json",
          Authorization: "Bearer " + process.env.COOKIES_API_KEY!,
        },
      });
    } catch (error) {
      console.error(error);
      return {};
    }
  };

  return {
    get,
  };
};
```
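For context, the composable normalizes the cookie id before building the request path. The real `normalizeString` comes from the project’s `#imports` and isn’t shown in this post, so the trim/lowercase/slug behavior below is purely an assumption — a minimal sketch of what such a helper *could* do, not the project’s actual implementation:

```typescript
// Hypothetical sketch of normalizeString — the real helper lives in the
// project's "#imports" and its exact behavior is not shown in the post.
const normalizeString = (value: string): string =>
  value
    .trim()                // drop surrounding whitespace
    .toLowerCase()         // assume ids are matched case-insensitively
    .replace(/\s+/g, "-"); // collapse inner whitespace into a URL-safe dash

console.log(normalizeString("  Session Token  ")); // "session-token"
```

With a helper like this, `get("  Session Token  ")` would end up requesting `/session-token` from the cookies endpoint.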
Alternatively, you can download the code file here:
Table of Contents
The following models are used for each test with their default parameters: