Meta: Llama 3.3 70B Instruct in Europe
The Meta Llama 3.3 multilingual large language model (LLM) is a pretrained and instruction-tuned generative model with 70B parameters (text in / text out). The instruction-tuned, text-only model is optimized for multilingual dialogue use cases and outperforms many available open-source and closed chat models on common industry benchmarks. Supported languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai. [Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_3/MODEL_CARD.md)
meta-llama/llama-3.3-70b-instruct
- Context Window: 131K
- Max Output: 16K
- Speed: 60
- Intelligence: 40
Capabilities
Model Information
- Provider: Meta
- Parameters: 70B
Integration
Start with one API call
OpenAI-compatible API. Use the same SDK you already know. Just swap the model parameter.
curl https://api.supa.works/openai/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_API_KEY>" \
-d '{
"model": "meta-llama/llama-3.3-70b-instruct",
"messages": [{"role": "user", "content": "Hello!"}]
}'
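The same request can be made from Python without any extra dependencies; a minimal sketch using only the standard library, mirroring the curl example above (`YOUR_API_KEY` is a placeholder, and the endpoint is taken from the example):

```python
import json
import urllib.request

# Endpoint from the curl example above.
API_URL = "https://api.supa.works/openai/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("meta-llama/llama-3.3-70b-instruct", "Hello!")

# Assemble the HTTP request; sending it requires a valid API key,
# so the actual call is left commented out.
req = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_API_KEY",
    },
)
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
```

Because the request body follows the OpenAI chat completions schema, the official OpenAI SDKs also work here once their base URL is pointed at the endpoint above.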
Why host on SUPA?
European Data Residency
All data is processed and stored exclusively on European servers, with full GDPR compliance.
OpenAI-Compatible API
Use the same SDK and code you already have. Just change the base URL.
No Infrastructure
No GPUs to manage, no scaling to worry about. Just API calls.
No Data Training
We never use your API requests to train models. Your data stays private.
Start using Meta: Llama 3.3 70B Instruct
Free sandbox tier available. No credit card required.
Get API Access