GameTranslate

In-game translator at your disposal · By Godnoken

Free LLM API resources + API Setup Guide (Sticky)

A topic by norby777 created 39 days ago Views: 572 Replies: 4

Hi, 

I wanted to share a GitHub page I found that gathers a collection of free and trial LLM APIs. In its own words, it is "a list of free LLM inference resources accessible via API" covering "various services that provide free access or credits towards API-based LLM usage."

If you're looking for free APIs, you'll find plenty here that can be used for translation. You might even check the GitHub page's forks, as there could be even more options available!

I've put together a small, dedicated guide for these APIs. It explains how to set up each free API for GameTranslate, plus a few extras that aren't in the list. Apologies in advance if anything is wrong or not 100% accurate. The guide is aimed mainly at people who aren't particularly tech-savvy and just need a straightforward walkthrough to get things set up.

API Setup Guide

1. OpenRouter (e.g., xAI: Grok 4 Fast - free)

API Key: Click your profile (top right) -> settings -> Create API Key -> Name it -> Copy it now and save it somewhere safe, as you won't see it again. Done!

You can find loads of other free APIs by searching for 'free' under the models section: Link

Code:

Endpoint URL: https://openrouter.ai/api/v1/chat/completions

Headers: 

{

    "Content-Type": "application/json",

    "Authorization": "Bearer your api key"

}

Body: 

{

    "model": "x-ai/grok-4-fast:free",

    "messages": [

        {

            "role": "user",

            "content": "Your prompt goes here, for example Translate this text to English and only return the translated text: %text%"

        }

    ],

    "stream": false,

    "reasoning": {

        "enabled": false

    }

}

Text Output Path: .choices[0].message.content

To switch models, you just change the model's name in the Body. Get the model name from the API tab on the model's page, under the curl example, e.g.: "model": "x-ai/grok-4-fast:free". Other examples are deepseek/deepseek-chat-v3.1:free or qwen/qwen3-235b-a22b:free. Find what works best for you!

Limits/Pricing: Free models are limited to 20 requests per minute. There's a 50 request per day limit if you haven't bought credits, or 1000 requests per day if you buy at least 10 credits. Link
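If you'd like to sanity-check this setup outside GameTranslate, here is a minimal Python sketch (assuming the requests library; the API key and the sample text are placeholders) that sends the same Body and reads the Text Output Path .choices[0].message.content:

import requests

API_KEY = "your api key"  # placeholder
url = "https://openrouter.ai/api/v1/chat/completions"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}",
}
body = {
    "model": "x-ai/grok-4-fast:free",
    "messages": [
        {
            "role": "user",
            "content": "Translate this text to English and only return the translated text: Bonjour",
        }
    ],
    "stream": False,
    "reasoning": {"enabled": False},
}

response = requests.post(url, headers=headers, json=body, timeout=60)
data = response.json()
# The Text Output Path .choices[0].message.content corresponds to:
print(data["choices"][0]["message"]["content"])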

2. Google AI Studio/Gemini

API Key: Head to Projects (left menu) -> Click Create a new project (top right) -> Give it a name -> Create project -> Go to API keys (left menu) -> Click Create API Key (top right) -> Name it -> Select your new project -> Click Create Key. That should do it!

Code:

Endpoint URL: https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-lite:generateContent

Headers:

{

    "Content-Type": "application/json",

    "X-goog-api-key": "your api key"

}

Body:

{

    "contents": [

        {

            "parts": [

                {

                    "text": "Your prompt goes here, for example Translate this text to English and only return the translated text: %text%"

                }

            ]

        }

    ],

    "generationConfig": {

        "thinkingConfig": {

            "thinkingBudget": 0

        }

    }

}

This Body already has reasoning disabled (thinkingBudget set to 0), so you can safely use reasoning models as well.

Text Output Path: candidates[0].content.parts[0].text

To switch to a different model, all you have to do is change the model's name directly in the Endpoint URL. Look for where it says models/ and replace the name right before :generateContent. For instance, you would change generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-lite:generateContent to generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash-lite:generateContent.

You can see the full list of models on the official documentation page, or in Google AI Studio: click the model name in the top right (where it says, e.g., Gemini 2.5 Pro) and you can see them there too. Crucially, remember to use the hyphenated version of the model name, like gemini-2.5-pro, not the display name like Gemini 2.5 Pro.

Limits/Pricing: You can check the rate limits either on the GitHub page or on Google's official Free Tier documentation.
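To make the model-in-the-URL idea concrete, here is a small Python sketch (again assuming the requests library; the key and the sample text are placeholders) that builds the Endpoint URL from a model name and reads the candidates[0].content.parts[0].text path:

import requests

API_KEY = "your api key"  # placeholder
model = "gemini-2.5-flash-lite"  # swap this to change models, e.g. "gemini-2.0-flash-lite"
url = f"https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent"

headers = {"Content-Type": "application/json", "X-goog-api-key": API_KEY}
body = {
    "contents": [
        {"parts": [{"text": "Translate this text to English and only return the translated text: Bonjour"}]}
    ],
    "generationConfig": {"thinkingConfig": {"thinkingBudget": 0}},
}

data = requests.post(url, headers=headers, json=body, timeout=60).json()
# The Text Output Path candidates[0].content.parts[0].text corresponds to:
print(data["candidates"][0]["content"]["parts"][0]["text"])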

3. Nvidia NIM

API Key: You'll need to verify your phone number first; look for the 'verify' link at the top after you sign up and enter your number. Then get the key from the top right (Get API Key) or via your profile under API Keys -> Generate API Key -> Name it and set the Expiration (1 year max) -> Click Generate Key. Important: Make sure to save your API key right away, because "This is the only time your key will be displayed. This key is for API testing use only and is valid for 1 year."

Code:

Endpoint URL: https://integrate.api.nvidia.com/v1/chat/completions

Headers: 

{

    "Content-Type": "application/json",

    "Authorization": "Bearer your api key"

}

Body:

{

    "model": "qwen/qwen3-next-80b-a3b-instruct",

    "messages": [

        {

            "role": "user",

            "content": "Your prompt goes here, for example Translate this text to English and only return the translated text: %text% /no_think"

        }

    ],

    "stream": false

}

Text Output Path: .choices[0].message.content

To switch models, you just change the model's name in the Body. Models: Check the documentation here. Select a model from the left, click to expand, and then find the model part in the code snippet on the right, for example: "model": "qwen/qwen3-coder-480b-a35b-instruct". Paste that name in. 

Limits/Pricing: 40 requests per minute

4. Mistral (La Plateforme)

API Key: In the left-side menu, go to API Keys -> Choose a Plan -> Select Experiment for free -> Click Subscribe -> You'll need to enter your phone number. -> Then, go back to API Keys -> Create new key -> Name your key and click Create new key again. You'll see this warning: "API key successfully created. Please copy it now, it will not be shown again. Note that it may take a few minutes to be usable."

Code:

Endpoint URL: https://api.mistral.ai/v1/chat/completions

Headers: 

{

    "Content-Type": "application/json",

    "Authorization": "Bearer your api key"

}

Body:

{

    "model": "magistral-small-2509",

    "messages": [

        {

            "role": "user",

            "content": "Your prompt goes here, for example Translate this text to English and only return the translated text: %text%"

        }

    ],

    "stream": false,

    "prompt_mode": null

}

Text Output Path: .choices[0].message.content

To switch models, simply update the model's name in the Body section. You can find the available models on their documentation page.

Limits(per-model)/Pricing: 1 request/second, 500,000 tokens/minute, 1,000,000,000 tokens/month Link

5. Mistral (Codestral)

This is on the same website as the previous one. Just find Codestral in the left menu -> Click Request Access -> Check the box and Accept and request access.  

API Key: Under the Codestral section -> Click Generate API Key. You're all set.

Code:

Endpoint URL: https://codestral.mistral.ai/v1/chat/completions

Headers: 

{

    "Content-Type": "application/json",

    "Authorization": "Bearer your api key"

}

Body:

{

    "model": "codestral-2508",

    "messages": [

        {

            "role": "user",

            "content": "Your prompt goes here, for example Translate this text to English and only return the translated text: %text%"

        }

    ],

    "stream": false,

    "prompt_mode": null

}

Text Output Path: .choices[0].message.content

To switch models, simply update the model's name in the Body section. You can find the available models on their documentation page.

Important: This API will only work with Codestral models, such as codestral-2508.

Limits(per-model)/Pricing: You're limited to 30 requests per minute, and 2,000 requests per day.

6. Hugging Face Inference Providers (e.g., Nebius AI)

Token Key: Go to your profile settings (top right) -> Access Tokens (left menu) -> Create new token -> Choose Read or Write access and name it -> Create Token. Remember this warning: "Save your token value somewhere safe. You will not be able to see it again after you close this modal. If you lose it, you'll have to create a new one."

Code:

Endpoint URL: https://router.huggingface.co/v1/chat/completions

Headers: 

{

    "Content-Type": "application/json",

    "Authorization": "Bearer your api key"

}

Body:

{

    "model": "meta-llama/Llama-3.1-8B-Instruct:nebius",

    "messages": [

        {

            "role": "user",

            "content": "Your prompt goes here, for example Translate this text to English and only return the translated text: %text%"

        }

    ],

    "stream": false

}

Text Output Path: .choices[0].message.content

If you want to use a different model, you just change the model name in the Body. Models: You can find the list of partners providing APIs here. A simpler view is on the models page under the Inference Providers filter. To find the model ID: Select a model (e.g., meta-llama/Llama-3.1-8B-Instruct), then click Deploy on the right -> Inference Providers -> Look for the model string, like: "model": "meta-llama/Llama-3.1-8B-Instruct:nebius". Copy that into your app.

Limits/Pricing: You get $0.10/month in credits. See the pricing page for details.

7. Vercel AI Gateway

API Key: You'll need to enter your credit card details during registration, but don't worry: it won't charge you anything unless you decide to upgrade. Like DeepL, it just stops working if you hit your limit. Steps: Profile (top right) -> Dashboard -> AI Gateway -> Create an API Key -> Create Key -> Name it and Create Key. Make sure to save the key because Vercel warns: "Save this key securely—it won't be shown again. Keep it safe, as anyone with access can make requests on your behalf."

Code:

Endpoint URL: https://ai-gateway.vercel.sh/v1/chat/completions

Headers: 

{

    "Content-Type": "application/json",

    "Authorization": "Bearer your api key"

}

Body:

{

    "model": "xai/grok-4-fast-non-reasoning",

    "messages": [

        {

            "role": "user",

            "content": "Your prompt goes here, for example Translate this text to English and only return the translated text: %text%"

        }

    ],

    "stream": false

}

Text Output Path: .choices[0].message.content

To switch models, you simply change the model's name in the Body. You can browse all the available models here. Pick one, like xai/grok-4-fast-non-reasoning, and the correct model name will be right there for you to use.

Limits/Pricing: Link. Free Tier Details: Every Vercel team account gives you $5 of free usage per month to play around with the AI Gateway at no initial cost. Here’s how the free tier works: You get a $5 credit every 30 days after your first request. This credit works across their entire model catalog. You can stay on the free tier indefinitely as long as you don't buy extra credits. If you move to a paid tier: Once you purchase credits, your account switches to a pay-as-you-go model. You won't get the $5 monthly free credit anymore, but you'll have more capacity.

8. Cerebras

API Key: The key is created automatically when you register, and you can find it later in the left menu under the API keys tab.

Code:

Endpoint URL: https://api.cerebras.ai/v1/chat/completions

Headers: 

{

    "Content-Type": "application/json",

    "Authorization": "Bearer your api key"

}

Body:

{

    "model": "qwen-3-235b-a22b-instruct-2507",

    "messages": [

        {

            "role": "user",

            "content": "Your prompt goes here, for example Translate this text to English and only return the translated text: %text% /no_think"

        }

    ],

    "stream": false

}

Text Output Path: .content

To switch models, just change the model's name in the Body. Models: You'll need the Model ID which is listed here. If you click on the models, you'll see settings that let you disable "reasoning" (the model's internal thinking process). For example, Qwen 3 235B Instruct only supports "non-thinking mode," so you won't see any <think></think> tags. 

For Qwen 3 32B, you can still disable the reasoning for speed by adding /no_think to your prompt (e.g., Tell me about cats /no_think). A heads-up though: Even with reasoning turned off, the empty <think></think> tags will still appear in your output. If you can live with the tags being there, the speed boost makes this a great option.
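If the leftover tags bother you and you are post-processing the text yourself anyway (this is not a GameTranslate setting, just a sketch), a small Python regular expression can remove empty <think></think> pairs from the output:

import re

raw_output = "<think>\n\n</think>\n\nHello there."
# Remove empty (or whitespace-only) <think>...</think> blocks plus trailing whitespace.
cleaned = re.sub(r"<think>\s*</think>\s*", "", raw_output).strip()
print(cleaned)  # -> Hello there.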

Limits/Pricing:  You can find the specific limits on the GitHub page or by checking the details for each model on the official overview page.


9. Groq

API Key: Look for Api Keys in the top right -> Click Create API Keys -> Name it and hit Submit. A critical step: "Your new API key has been created. Copy it now, as we will not display it again." Make sure you save it!

Code:

Endpoint URL: https://api.groq.com/openai/v1/chat/completions

Headers: 

{

    "Content-Type": "application/json",

    "Authorization": "Bearer your api key"

}

Body:

{

    "messages": [

        {

            "role": "user",

            "content": "Your prompt goes here, for example Translate this text to English and only return the translated text: %text% /no_think"

        }

    ],

    "model": "qwen/qwen3-32b",

    "stream": false,

    "include_reasoning": false

}

Text Output Path: .choices[0].message.content

To switch models, just change the model's name in the Body. Models: Find the full list on the GitHub page, Official site, or directly in the playground. You can usually disable "reasoning" (the model's thinking process) for speed using flags like /no_think or "include_reasoning": false. For instance, you'd use "include_reasoning": false for qwen/qwen3-32b. Be aware: Some models (like moonshotai/kimi-k2-instruct-0905) are non-reasoning by default, so you might need to remove "include_reasoning": false if it’s there, just to get them working properly.

Limits/Pricing: Check the GitHub page or the official rate limits documentation.

10. Together (Free)

I haven't personally tested this one because it requires adding a credit card and topping up with $5. However, based on the documentation, the setup process should be the same as the others on the list.

The GitHub resource indicates that after the initial $5 payment, you gain access to two free models: https://www.together.ai/models/deepseek-r1-distilled-llama-70b-free and https://www.together.ai/models/llama-3-3-70b-free.

The API structure is very similar for both:

Endpoint URL: https://api.together.xyz/v1/chat/completions

Headers:

{

    "Content-Type": "application/json",

    "Authorization": "Bearer your api key"

}

Body:

{

    "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-70B-free",

    "messages": [

      {

        "role": "user",

        "content": "Your prompt"

      }

    ]

}

Limits/Pricing: Up to 60 requests/minute

11. Cohere

API Key: The key is automatically generated the moment you sign up. You can find it later under the API keys tab in the left-hand menu.

Code:

Endpoint URL: https://api.cohere.ai/v2/chat

Headers: 

{

    "Content-Type": "application/json",

    "Authorization": "Bearer your api key"

}

Body:

{

    "model": "command-a-translate-08-2025",

    "messages": [

        {

            "role": "user",

            "content": "Your prompt goes here, for example Translate this text to English and only return the translated text: %text%"

        }

    ],

    "stream": false,

    "thinking": {

        "type": "disabled"

    }

}

Text Output Path: .text

To switch models, just change the model's name in the Body. Models: Check the official documentation or look in the model section of the Playground. You can disable reasoning for models like command-a-reasoning-08-2025 by including the parameter: thinking: {"type": "disabled"}.

Limits/Pricing: The limits are 20 requests per minute and 1,000 requests per month. For more details, see the official docs.

12. GitHub

Token Key: Go to your profile Settings (top right) -> In the left menu, scroll to the bottom and select Developer settings -> Choose Personal access tokens and then Tokens (classic) -> Click Generate new token (top right) -> Select Generate new token (classic) -> Give it a name and set the Expiration -> Click Generate token at the bottom. Crucial: "Make sure to copy your personal access token now. You won’t be able to see it again!"

Code:

Endpoint URL: https://models.github.ai/inference/chat/completions

Headers: 

{

    "Content-Type": "application/json",

    "Authorization": "Bearer your api key"

}

Body:

{

    "model": "meta/Meta-Llama-3.1-8B-Instruct",

    "messages": [

        {

            "role": "user",

            "content": "Your prompt goes here, for example Translate this text to English and only return the translated text: %text%"

        }

    ],

    "stream": false

}

Text Output Path: .choices[0].message.content

To switch models, just change the model's name in the Body. Models: You can browse the models on the GitHub Marketplace. To find the exact model ID: click on the model -> Go to the Playground tab at the top -> Click Code -> Look for the model string, e.g.: "model": "meta/Meta-Llama-3.1-8B-Instruct".

Limits/Pricing: Be aware that the input/output token limits are extremely restrictive. The actual limits depend on your Copilot subscription tier (Free/Pro/Pro+/Business/Enterprise). More details can be found in the documentation.

13. Cloudflare

API Key: Click the small person icon (top right) -> Profile -> In the left-hand menu, API Tokens -> Create Token -> Workers AI -> Name it -> Include - All accounts -> Continue to summary -> Create Token -> "Copy this token to access the Cloudflare API. For security this will not be shown again."

CLOUDFLARE_ACCOUNT_ID: Link -> Account home -> Click the three small buttons next to the 'Account' text -> Copy account ID. This ID gets pasted into the Endpoint URL.

Code:

Endpoint URL: https://api.cloudflare.com/client/v4/accounts/CLOUDFLARE_ACCOUNT_ID/ai/run/@cf/meta/llama-3.1-8b-instruct

Headers: 

{

    "Content-Type": "application/json",

    "Authorization": "Bearer your api key"

}

Body:

{

    "messages": [

        {

            "role": "user",

            "content": "Your prompt goes here, for example Translate this text to English and only return the translated text: %text%"

        }

    ],

    "stream": false

}

Text Output Path: .result.response

To switch models, just change the model's name in the Endpoint URL. Models: Browse the list here. When you pick a model (e.g., llama-3.1-8b-instruct), copy the entire Model ID (e.g., "@cf/meta/llama-3.1-8b-instruct") and insert it into your Endpoint URL right after the /run/ segment. Example URL structure: .../ai/run/@cf/meta/llama-3.1-8b-instruct
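As a quick illustration of that URL structure (the account ID and model ID below are placeholders), this is all the model switch amounts to:

# Placeholders: substitute your own account ID and the model ID you copied.
account_id = "CLOUDFLARE_ACCOUNT_ID"
model_id = "@cf/meta/llama-3.1-8b-instruct"

url = f"https://api.cloudflare.com/client/v4/accounts/{account_id}/ai/run/{model_id}"
print(url)
# -> https://api.cloudflare.com/client/v4/accounts/CLOUDFLARE_ACCOUNT_ID/ai/run/@cf/meta/llama-3.1-8b-instruct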

Limits/Pricing: Your free allocation is 10,000 neurons per day. You can find more details on their pricing page.

14. Google Cloud Vertex AI

This is part of Google Cloud, but I wasn't able to get it working, so I skipped it.

Starting now, I'll be sharing a few extra APIs that weren't on the original list. I thought you might find them useful too!

15. Azure AI Translator

Warning: You will need to provide your credit card details to use this service.

Setup Steps: Use the search bar to find Translators -> Click Create -> Enter your details (name, region) and select the F0 free tier -> Hit 'create' again to finish.

API Key: Navigate to the new Translator service you just created -> In the left menu, you'll find everything you need under Keys and Endpoint.

Code:

Endpoint URL: https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=YOUR_LANGUAGE_CODE (e.g., to=en)

Headers:

{

    "Content-Type": "application/json",

    "Ocp-Apim-Subscription-Key": "YOUR API KEY",

    "Ocp-Apim-Subscription-Region": "YOUR REGION (under Keys and Endpoint)"

}

Body:

[

    {

        "Text": "%text%"

    }

]

Text Output Path: .translations[0].text

Limits/Pricing: The F0 Tier gives you a generous 2 million characters per hour. The system automatically enforces this limit, so once you reach 2 million characters within an hour, the service will simply stop working until the next hour begins. Link, Link.
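One detail worth noting with this API: both the request Body and the response are JSON arrays rather than single objects. A minimal Python sketch (requests library assumed; the key, region, language code, and sample text are placeholders):

import requests

url = "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=en"
headers = {
    "Content-Type": "application/json",
    "Ocp-Apim-Subscription-Key": "YOUR API KEY",     # placeholder
    "Ocp-Apim-Subscription-Region": "YOUR REGION",   # placeholder
}
body = [{"Text": "Bonjour"}]  # an array of texts, not a single object

data = requests.post(url, headers=headers, json=body, timeout=60).json()
# The response is also an array (one entry per input text), so the
# Text Output Path .translations[0].text applies to its first element:
print(data[0]["translations"][0]["text"])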

16. Z.AI

API Key: Click your profile (top right) -> Go to API Keys -> Select Create a new API key -> Give it a name and hit Confirm. That's all there is to it.

Based on what I've seen, it looks like there's only one free model available, which is the flash version.

Code:

Endpoint URL: https://api.z.ai/api/paas/v4/chat/completions

Headers: 

{

    "Content-Type": "application/json",

    "Authorization": "Bearer your api key"

}

Body:

{

    "model": "glm-4.5-flash",

    "messages": [

        {

            "role": "user",

            "content": "Your prompt goes here, for example Translate this text to English and only return the translated text: %text% /nothink"

        }

    ],

    "stream": false,

    "extra_body": {

        "chat_template_kwargs": {

            "enable_thinking": false

        }

    }

}

Text Output Path: .choices[0].message.content

Limits/Pricing: The free model, GLM-4.5-Flash, has a concurrency limit of 2. "Explanation of Rate Limits: To ensure stable access to GLM-4-Flash during the free trial, requests with context lengths over 8K will be throttled to 1% of the standard concurrency limit." In the pricing table, GLM-4.5-Flash is listed as Free for input, cached input, cached input storage, and output. You can find pricing and limit details in their documentation.


17. RapidAPI: Example (AIbit Translator) - translator-based

Code:

Endpoint URL: https://aibit-translator.p.rapidapi.com/api/v1/translator/text

Headers: 

{

    "Content-Type": "application/json",

    "x-rapidapi-host": "aibit-translator.p.rapidapi.com",

    "x-rapidapi-key": "YOUR RAPIDAPI KEY"

}

Body:

{

    "from": "auto",

    "to": "en",

    "text": "%text%",

    "provider": "google"

}

Text Output Path: .trans

Model Selection: When switching to a different model, you must always examine the model's specific code and configuration, as these details will vary. Pay close attention to the fact that the x-rapidapi-host will change every time, and the parameters required in the Body tab will also be different. (The x-rapidapi-key, however, will be generated for you automatically.)

Models: Translator Models or AI Models.

How to: Choose an API, like AIbit translator -> Navigate back to the main API page or click API Overview -> Select the Basic plan -> Start Free Plan -> Subscribe -> In the left menu, select the appropriate method (usually a POST request named 'Translate Text' for translation, or 'chat'/'model' for AI) -> Under Code Snippets, ensure you set the Target: Shell and Client: cURL -> Go to the Body tab and modify the language parameters and/or insert your prompt text -> The final cURL code snippet shown is what you need to copy into the app

Finding the Text Output Path: Click Test Endpoint (top right) to execute the request -> In the response area (check the Raw tab for clarity), find the translated text and note the JSON structure around it. For example, if the response looks like {"trans":"%text%"}, the output path is .trans. Or for Deep Translate, where the text sits under data -> translations -> translatedText -> 0, the path is .data.translations.translatedText[0].
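To show how those dotted paths line up with the JSON, here is a tiny Python sketch using made-up responses shaped like the two examples above:

# Made-up responses shaped like the examples above, just to show the mapping.
aibit_response = {"trans": "Hello"}
deep_translate_response = {"data": {"translations": {"translatedText": ["Hello"]}}}

# Text Output Path .trans:
print(aibit_response["trans"])
# Text Output Path .data.translations.translatedText[0]:
print(deep_translate_response["data"]["translations"]["translatedText"][0])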

I hope it is somewhat understandable, but if you're struggling to figure out the output path, copy the response text and ask an AI tool; it should be able to identify the correct JSON path for you.

Limits/Pricing: Details are available by clicking on the Basic plan within the API Overview section.

18. RapidAPI: Example (Lingvanex Translate) - translator-based

This service is similar to the Google Cloud offering, but I wanted to highlight it specifically because of its allowance of 500,000 characters per month.

Warning: This could be a bit risky if you're not paying attention, because if you go over the 500,000 character/month limit, you'll automatically start paying (+$0.000005 per additional character). But don't worry, you can set up a Budget Alert to help manage your spending.

You must provide your credit card details to use this service.

The configuration is the same as the previous one.

Code:

Endpoint URL: https://lingvanex-translate.p.rapidapi.com/translate

Headers: 

{

    "Content-Type": "application/json",

    "x-rapidapi-host": "lingvanex-translate.p.rapidapi.com",

    "x-rapidapi-key": "YOUR RAPIDAPI KEY"

}

Body:

{

    "platform": "api",

    "from": "ja",

    "to": "en",

    "enableTransliteration": false,

    "data": "%text%"

}

Text Output Path: .result

Limits/Pricing: While the limit is 500,000 characters per month, it is not a hard limit that stops the service; instead, once you reach it, your account automatically transitions to a pay-as-you-go model at +$0.000005 per additional character. Check the Basic plan details under API Overview for more information.

19. RapidAPI: Example (ChatGPT 4-chatgpt-42) - AI-Based

Same as above. For an AI model, you'll select a POST method on the left that is labeled chat or model. For this example, I'm choosing the Llama 3.3 70B Instruct model. You simply need to rewrite the Body section with your desired prompt, and the resulting cURL will be used.

Code:

Endpoint URL: https://chatgpt-42.p.rapidapi.com/conversationllama3

Headers: 

{

    "Content-Type": "application/json",

    "x-rapidapi-host": "chatgpt-42.p.rapidapi.com",

    "x-rapidapi-key": "YOUR RAPIDAPI KEY"

}

Body:

{

    "messages": [

        {

            "role": "user",

            "content": "Your prompt goes here, for example Translate this text to English and only return the translated text: %text%"

        }

    ],

    "web_access": false

}

Text Output Path: .result

Limits/Pricing: This particular plan: Credits: 300 per month (hard limit); Tokens: 100,000 per month (hard limit); Requests: 300 per month (hard limit); Hourly rate: 1,000 requests per hour. You can always find these details by clicking on the Basic plan information under API Overview.

20. LLM7

Token Key: Click Get Free Token at the bottom of the page -> Sign in -> Click ADD -> Name your token and set an expiration date -> Create.

Code:

Endpoint URL: https://api.llm7.io/v1/chat/completions

Headers: 

{

    "Content-Type": "application/json",

    "Authorization": "Bearer your api key"

}

Body:

{

    "model": "mistral-small-3.1-24b-instruct-2503",

    "messages": [

        {

            "role": "user",

            "content": "Your prompt goes here, for example Translate this text to English and only return the translated text: %text%"

        }

    ],

    "stream": false

Text Output Path: .choices[0].message.content

To switch models, you simply change the model's name in the Body. Models: You can find the list here, or use the 'Select model' dropdown on the homepage. A tip: You can try using models that aren't officially listed, like meta-llama/Llama-3.3-70B-Instruct; it might just work!

Limits/Pricing: The limits are tiered and listed at the bottom of the main page: 45 requests per minute if you're anonymous and haven't signed up, 150 requests per minute with a free token, and 500+ requests per minute on paid tiers.

21. Google Translate API

Warning: This one requires attention! If you exceed the 500,000 character/month free limit, you will automatically be charged. Solution: It's highly recommended to set up a Budget Alert. For instance, you could set a total budget of $1 and receive email alerts when you hit 50%, 90%, 100%, and 150% of that tiny amount, giving you peace of mind. A credit card is required for usage (but $300 in free credits may be available).

API Key: Go to the console (top right) -> Under Quick access, select APIs & Services -> Credentials (left menu) -> Create credentials (top) -> API key -> Name it and Create.

Enabling the API: You also need to explicitly enable the service. In the left menu, go to Library -> Search for Cloud Translate API -> Enable. This will require you to set up a billing account first.

Code:

Endpoint URL: https://translation.googleapis.com/language/translate/v2?key=YOUR_API_KEY

Headers: 

{

    "Content-Type": "application/json"

}

Body:

{

    "q": "%text%",

    "source": "ja",

    "target": "en"

}

Text Output Path: .data.translations[0].translatedText

Limits/Pricing: The first 500,000 characters per month are free. After that, the cost is $20 per million characters. Link.

And that concludes the guide! If you found any mistakes, my apologies! :) I hope everything made sense! The most important thing is that these APIs are confirmed to be working (or to have worked) for me personally. :)

Developer

Legend! Thank you for putting in all this effort, I appreciate this so much. I will sticky note it.

Fyi for anyone reading - prompts can vary a lot, and the correct one is always decided by the API/model. Please look for information there first; if you can't get it working, comment here or make a post in the forum. Thank you!

You're welcome!

I was curious about this stuff and wanted to test it all out anyway, so I figured I might as well write up my findings and share them here.