Local AI Models
LiberaGPT uses compact, optimised AI models that run fully offline on your iPhone. Just download the models you need. Each model has its own strengths and is optimised for the Neural Engine with quantisation that balances quality and speed.
Choose the model you need, download it, and use it offline.

Available Models
Six quantised models (Q4_K) with local inference optimised via llama.cpp. Download what you need in Settings. Optimised for the A19 Pro chip.
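As a rough illustration of the download-then-run flow, the sketch below checks whether a given GGUF file is already present on device before it would be handed to the llama.cpp runtime. The file name and storage location are assumptions for illustration; LiberaGPT's actual file layout and bindings are not documented here.

```swift
import Foundation

// Hypothetical file name, purely for illustration; the real names used by the app may differ.
let modelFile = "smollm3-3b-q4_k_m.gguf"

/// Returns the local URL of a downloaded model, or nil if it has not been downloaded yet.
/// Assumes downloaded models live in the app's Documents directory.
func localModelURL(named fileName: String) -> URL? {
    let docs = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    let url = docs.appendingPathComponent(fileName)
    return FileManager.default.fileExists(atPath: url.path) ? url : nil
}

if let url = localModelURL(named: modelFile) {
    // The path would be passed to the llama.cpp runtime for fully offline inference.
    print("Model ready for offline use at \(url.path)")
} else {
    print("Model not downloaded yet; fetch it from Settings first.")
}
```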
Model Benchmarks
This benchmark evaluates the five small language models available in LiberaGPT under actual mobile inference conditions rather than cloud-hosted assumptions. The test context matters. LiberaGPT presents itself as a fully on-device iPhone system using 4-bit quantised GGUF models, with no cloud execution, and states that it is optimised for A18 and A19 Pro class devices with Neural Engine and Metal GPU acceleration. On the Apple side, iPhone 17 Pro is built on the A19 Pro, not the base A19, and Apple lists a 16-core Neural Engine together with Neural Accelerators integrated into each GPU core. The correct framing is therefore constrained local inference on iPhone 17 Pro hardware, not generic model performance in the abstract.
Method
Each model was tested against the same ten prompts across five categories: reasoning, factual knowledge, instruction following, code, and judgment under ambiguity. Each category contained two prompts. Every response was scored on eight criteria: factual correctness, reasoning validity, completeness, instruction compliance, clarity, precision, hallucination resistance, and efficiency. Category-specific weighting was then applied so that reasoning tasks rewarded logic, factual tasks rewarded accuracy and fabrication control, instruction tasks rewarded constraint obedience, code tasks rewarded technical correctness, and judgment tasks rewarded sound prioritisation.
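To make the weighting scheme concrete, here is a minimal sketch of how per-criterion scores could be combined into a weighted response score. The eight criteria come from the description above, but the actual weight values used in the benchmark are not published, so the numbers in the example are assumptions.

```swift
// Illustrative only: the criteria are taken from the benchmark description,
// but the weight values below are assumptions, not the published scheme.
enum Criterion: CaseIterable {
    case factual, reasoning, completeness, compliance, clarity, precision, hallucination, efficiency
}

// Category-specific weights: e.g. a reasoning task up-weights logic, a factual task
// would up-weight accuracy and hallucination resistance instead.
let reasoningWeights: [Criterion: Double] = [
    .factual: 1, .reasoning: 3, .completeness: 1, .compliance: 1,
    .clarity: 1, .precision: 1, .hallucination: 1, .efficiency: 1
]

/// Weighted average of per-criterion scores (each on a 0-10 scale), returned on a 0-100 scale.
func weightedScore(_ scores: [Criterion: Double], weights: [Criterion: Double]) -> Double {
    let totalWeight = weights.values.reduce(0, +)
    let weighted = Criterion.allCases.reduce(0.0) { sum, c in
        sum + (scores[c] ?? 0) * (weights[c] ?? 0)
    }
    return weighted / totalWeight * 10
}
```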
Scores were normalised to 100. Category scores are the mean of the two prompts in that category. Overall score is the mean across all five category scores. Hard failures materially reduced scores where the response was truncated, contradicted itself, failed explicit formatting constraints, or produced content that was plainly wrong.
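A minimal sketch of the aggregation, assuming the hard-failure penalty is a simple multiplicative scaling; the report only says such failures "materially reduced" scores, so the factor below is an assumption. The example reproduces SmolLM3's category and overall scores from the tables below.

```swift
struct PromptResult {
    var score: Double      // already normalised to 0-100
    var hardFailure: Bool  // truncated, self-contradictory, broke formatting, or plainly wrong
}

/// Each category contains two prompts; a hard failure scales that prompt's score down.
/// The 0.5 penalty factor is an assumption, not a published constant.
func categoryScore(_ prompts: [PromptResult], penalty: Double = 0.5) -> Double {
    let adjusted = prompts.map { $0.hardFailure ? $0.score * penalty : $0.score }
    return adjusted.reduce(0, +) / Double(adjusted.count)
}

/// Overall score is the unweighted mean of the five category scores.
func overallScore(_ categories: [Double]) -> Double {
    return categories.reduce(0, +) / Double(categories.count)
}

// SmolLM3's two reasoning prompts from the heatmap, with no hard failures:
let reasoning = [PromptResult(score: 99.4, hardFailure: false),
                 PromptResult(score: 86.9, hardFailure: false)]
print(categoryScore(reasoning))            // ≈ 93.1, the reported reasoning score

// SmolLM3's five category scores from the results table:
print(overallScore([93.1, 86.3, 76.6, 95.3, 91.6]))  // ≈ 88.6, the reported overall
```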
Top Line Result
SmolLM3 was the strongest model in the benchmark and the only system that performed at a high level across all major categories without a major collapse. Phi-4 Mini ranked second and was the most controlled model on strict instruction-following tasks. AceInstruct ranked third and remained strong in code and general knowledge, but it was less reliable under tighter constraint pressure and weaker on the second reasoning problem.
EXAONE Deep showed real upside in knowledge and judgment, but repeated output-control failures materially reduced its operational score. StableLM Zephyr was the weakest model in the set, with major failures in basic reasoning and factual reliability.
Overall Weighted Score
Ranked by overall weighted performance across all five benchmark categories.
Scored Results
| Model | Overall | Grade | Reasoning | Knowledge | Instruction | Code | Judgment |
|---|---|---|---|---|---|---|---|
| SmolLM3 (3.0B) | 88.6 | Best in cohort | 93.1 | 86.3 | 76.6 | 95.3 | 91.6 |
| Phi-4 Mini (3.8B) | 82.3 | High-performing | 72.8 | 75.1 | 90.4 | 93.3 | 79.9 |
| AceInstruct (1.5B) | 81.7 | High-performing | 70.6 | 85.7 | 78.8 | 92.5 | 80.7 |
| EXAONE Deep (1.2B) | 63.4 | Limited utility | 64.1 | 84.3 | 38.1 | 49.3 | 81.3 |
| StableLM Zephyr (1.6B) | 48.6 | Weak | 15.0 | 43.1 | 49.4 | 62.8 | 72.9 |
Category Comparison
Performance breakdown across five benchmark categories. Higher bars indicate better performance.
Consistency by Model
Standard deviation of category scores. Lower is better. Green = highly consistent, yellow = moderate variance, red = erratic.
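A small sketch of how the consistency figure could be computed: the population standard deviation of each model's five category scores, bucketed into the colour bands. The numeric cut-offs below are assumptions; the chart only describes the bands qualitatively.

```swift
import Foundation

func standardDeviation(_ xs: [Double]) -> Double {
    let mean = xs.reduce(0, +) / Double(xs.count)
    let variance = xs.map { ($0 - mean) * ($0 - mean) }.reduce(0, +) / Double(xs.count)
    return variance.squareRoot()
}

// Category scores from the results table above.
let categoryScores: [String: [Double]] = [
    "SmolLM3":  [93.1, 86.3, 76.6, 95.3, 91.6],
    "StableLM": [15.0, 43.1, 49.4, 62.8, 72.9],
]

for (model, scores) in categoryScores {
    let sd = standardDeviation(scores)
    // Band thresholds (8 and 15 points) are assumptions for illustration.
    let band = sd < 8 ? "green" : (sd < 15 ? "yellow" : "red")
    print("\(model): sd = \(String(format: "%.1f", sd)) (\(band))")
}
```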
Number of Weak Responses
Count of question-level scores below 50. Green = no failures, yellow = 1-2 failures, red = 3+ failures.
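The weak-response count is simpler: for each model, count the question-level scores that fall below 50 and bucket them per the legend. The sketch below uses two rows of the heatmap further down.

```swift
// Question-level scores (R1..J2) taken from the heatmap table below.
let questionScores: [String: [Double]] = [
    "SmolLM3":     [99.4, 86.9, 77.9, 94.7, 78.8, 74.4, 95.3, 95.3, 90.9, 92.4],
    "EXAONE Deep": [90.0, 38.1, 88.2, 80.3, 27.6, 48.5, 13.3, 85.3, 87.4, 75.3],
]

for (model, scores) in questionScores {
    let weak = scores.filter { $0 < 50 }.count
    // Legend: 0 weak = green, 1-2 = yellow, 3 or more = red.
    let band = weak == 0 ? "green" : (weak <= 2 ? "yellow" : "red")
    print("\(model): \(weak) weak responses (\(band))")
}
```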
Findings
1. Best overall model
SmolLM3 was the best model in the test set. It led the benchmark overall and finished first in reasoning, knowledge, code, and judgment. Its main weakness was strict format compliance. It missed some tight instruction constraints, but unlike the weaker models it did not collapse on logic, code, or factual grounding. It is the clearest candidate for default on-device use where broad reliability matters more than one specialised strength.
2. Best controlled model
Phi-4 Mini was the cleanest model under explicit constraints. It scored first in instruction following and remained near the top in code. It was not the strongest factual model and it lost ground on the astronomy question, but it was disciplined, technically competent, and less erratic than EXAONE Deep or StableLM. For tasks where structure matters as much as content, Phi-4 Mini has a strong claim.
3. Strong but slightly brittle performer
AceInstruct finished close behind Phi-4 Mini. It performed very well in code and general knowledge and was generally clear. Its weaker points were exact compliance and combinatorial reasoning. It handled straightforward tasks well, but when the prompt demanded precise bounded output or a tighter search strategy, its performance dropped.
4. High upside, weak output control
EXAONE Deep was the most polarised model in the set. On substance alone it could look strong. Its knowledge and judgment scores were competitive, and some final answers were good. The problem was operational reliability. Several responses exposed internal reasoning text, one code answer never reached a usable solution, one instruction task failed completely, and one reasoning answer was cut off mid-stream. In a production setting that is a serious weakness. The issue is not intelligence alone; it is answer control.
5. Weakest benchmarked model
StableLM Zephyr was the weakest model overall. It failed the simple price-calculation task, failed the heavier-ball reasoning task, produced major factual errors in the astronomy answer, and missed strict formatting requirements badly. It was not unusable everywhere; some judgment and code responses were serviceable. But the benchmark does not support positioning it as a reliable default model inside a high-trust mobile product.
What the Benchmark Shows
The main result is that parameter count alone did not determine usefulness. The best-performing model here was not the largest, and the weakest model was not the smallest. The decisive factors were reasoning stability, factual restraint, and output control under mobile constraints.
The second result is that format discipline is a real separator in small on-device models. Phi-4 Mini won that category clearly. SmolLM3, despite winning overall, still lost points where the task required exact bullet lengths or strict word-count boundaries.
The third result is that output control matters as much as intelligence in a mobile app. EXAONE Deep produced some strong underlying content, but repeated chain leakage and truncation made it harder to trust operationally than the raw category scores alone would suggest.
Question-Level Performance Heatmap
| Question | SmolLM3 | Phi-4 Mini | AceInstruct | EXAONE Deep | StableLM |
|---|---|---|---|---|---|
| R1 | 99.4 | 98.8 | 99.4 | 90.0 | 18.1 |
| R2 | 86.9 | 46.9 | 41.9 | 38.1 | 11.9 |
| K1 | 77.9 | 66.5 | 93.2 | 88.2 | 41.5 |
| K2 | 94.7 | 83.8 | 78.2 | 80.3 | 44.7 |
| I1 | 78.8 | 89.4 | 87.9 | 27.6 | 49.7 |
| I2 | 74.4 | 91.5 | 69.7 | 48.5 | 49.1 |
| C1 | 95.3 | 95.3 | 93.1 | 13.3 | 82.5 |
| C2 | 95.3 | 91.4 | 91.9 | 85.3 | 43.1 |
| J1 | 90.9 | 74.1 | 81.2 | 87.4 | 65.6 |
| J2 | 92.4 | 85.6 | 80.3 | 75.3 | 80.3 |
R = Reasoning, K = Knowledge, I = Instruction, C = Code, J = Judgment. Green = excellent (≥90), Turquoise = good (≥75), Yellow = acceptable (≥50), Red = poor (<50).