This is a simplified guide to an AI model called deepseek-r1-70b, maintained by lucataco. If you like this kind of analysis, you should join AImodels.fyi or follow us on Twitter.
The deepseek-r1-70b model is a first-generation reasoning model from DeepSeek that achieves performance comparable to OpenAI's o1. Maintained by lucataco, this deployment builds on research and development in large-scale language models and draws on related models such as deepseek-r1 and deepseek-vl-7b-base.
Model Inputs and Outputs
The model accepts a text prompt and generates a response, with configurable parameters that control the generation process; a usage sketch follows each parameter list below.
Inputs
- prompt: Text input for the model to process
- temperature: Controls output randomness (0-2)
- top_p: Nucleus sampling threshold that controls output diversity (0-1)
- repeat_penalty: Penalizes repeated tokens to reduce repetition in the generated output (0-2)
- max_tokens: Limits generation length (1-8192)
- seed: Sets randomization seed (0 for random)
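As a concrete illustration, here is a minimal sketch of calling the model through Replicate's Python client with these parameters. The model slug lucataco/deepseek-r1-70b and the parameter values are assumptions for illustration; check the Replicate model page for the exact identifier and current defaults.

```python
import replicate

# Input parameters described above; values here are illustrative assumptions.
model_input = {
    "prompt": "Explain why the sum of two even numbers is always even.",
    "temperature": 0.6,     # 0-2: lower values make output more deterministic
    "top_p": 0.9,           # 0-1: nucleus sampling threshold
    "repeat_penalty": 1.1,  # 0-2: discourages repeated tokens
    "max_tokens": 1024,     # 1-8192: cap on generated length
    "seed": 0,              # 0 selects a random seed
}

# Assumed model slug; replace with the exact identifier from the Replicate page.
output = replicate.run("lucataco/deepseek-r1-70b", input=model_input)
```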
Outputs
- Text responses: Generated text that follows the input prompt and instructions
- Iterator format: output is returned as a sequence of string chunks that can be concatenated into the full response
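Continuing the sketch above, you can either join the chunks after replicate.run returns or stream them as they are generated. This assumes the Replicate Python client and the same (assumed) model slug; replicate.stream requires a recent client version.

```python
# Join the chunks returned by replicate.run into the full response text.
print("".join(output))

# Or stream tokens as they are generated.
import replicate

for event in replicate.stream("lucataco/deepseek-r1-70b", input=model_input):
    print(str(event), end="")
```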
Capabilities
The model excels at mathematical reasoning and other complex, multi-step reasoning tasks.
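DeepSeek-R1 models typically emit their intermediate chain of thought inside <think>...</think> tags before the final answer. As a hedged sketch, assuming that convention holds for this deployment, you can separate the reasoning trace from the answer like this:

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Separate the <think>...</think> trace from the final answer, if present."""
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not match:
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer

full_text = "".join(output)  # output from the replicate.run call above
reasoning, answer = split_reasoning(full_text)
print("Reasoning excerpt:", reasoning[:200])
print("Answer:", answer)
```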