I am a Master's student in Logic at Sun Yat-sen University, exploring Neuro-Symbolic AI: combining logical reasoning with deep learning.
- 🔭 Current Focus: LLM reasoning (CoT/ToT) and parameter-efficient fine-tuning (QLoRA).
- 🧠 Research Interests: AI alignment, logical consistency, and counterfactual reasoning.
- 🛠️ Tech Stack: PyTorch, Hugging Face, Linux (WSL2), Docker.
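Since parameter-efficient fine-tuning is a current focus, here is a minimal NumPy sketch of the core LoRA idea (a toy illustration, not the `peft` library API): keep the pretrained weight frozen and learn only a low-rank update. All names and dimensions below are made up for the example.

```python
import numpy as np

# Toy LoRA sketch: instead of updating a frozen weight W (d_out x d_in),
# train a low-rank update B @ A of rank r, so only r * (d_in + d_out)
# parameters are trainable instead of d_in * d_out.
rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4

W = rng.standard_normal((d_out, d_in))       # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01    # trainable down-projection
B = np.zeros((d_out, r))                     # trainable up-projection, zero-init
alpha = 8                                    # scaling hyperparameter

def lora_forward(x):
    # Adapted forward pass: W x + (alpha / r) * B (A x)
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# Because B starts at zero, the adapter is an exact no-op at initialization.
assert np.allclose(lora_forward(x), W @ x)

full_params = W.size            # 4096 weights to update in full fine-tuning
lora_params = A.size + B.size   # 512 trainable adapter weights here
print(f"trainable params: {lora_params} vs full fine-tuning: {full_params}")
```

The zero-initialized `B` is the standard trick: training starts from exactly the pretrained behavior and only gradually deviates.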
| Date | Paper Title | Links | Tags |
|---|---|---|---|
| 2025-12-30 | Attention Is All You Need | | Transformer |
| 2025-12-30 | BERT: Pre-training of Deep Bidirectional Transformers | | Encoder |
| 2026-01-08 | Language Models are Few-Shot Learners (GPT-3) | | Decoder |
| Date | Paper Title | Links | Tags |
|---|---|---|---|
| 2026-01-09 | Chain-of-Thought Prompting Elicits Reasoning | | CoT |
| 2026-01-10 | Self-Consistency Improves Chain of Thought Reasoning in Language Models | | CoT-SC |
| 2026-01-13 | Tree of Thoughts: Deliberate Problem Solving with Large Language Models | | ToT |
| 2026-01-14 | ReAct: Synergizing Reasoning and Acting in Language Models | | Agent, tools_call |
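The CoT-SC paper above boils down to a simple aggregation step, which can be sketched in a few lines (the sampled answers below are hypothetical; the reasoning chains themselves are omitted):

```python
from collections import Counter

def self_consistency_vote(sampled_answers):
    """Core aggregation step of CoT self-consistency (Wang et al.):
    sample several reasoning chains from the model, keep only each
    chain's final answer, and return the most common one."""
    counts = Counter(sampled_answers)
    answer, _ = counts.most_common(1)[0]
    return answer

# Hypothetical final answers extracted from 5 sampled chains of thought
# for the same math word problem:
answers = ["18", "18", "26", "18", "26"]
print(self_consistency_vote(answers))  # majority answer: "18"
```

The intuition: a wrong chain can reach many different wrong answers, but correct chains tend to converge on the same answer, so majority voting filters noise.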
| Date | Paper Title | Links | Tags |
|---|---|---|---|
| 2026-01-15 | LoRA: Low-Rank Adaptation of Large Language Models | | LoRA, fine_tuning |
| 2026-01-16 | QLoRA: Efficient Finetuning of Quantized LLMs | | QLoRA, Quantization, fine_tuning |
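To make the quantization side of QLoRA concrete, here is a sketch of blockwise absmax 4-bit quantization, a simpler cousin of the NF4 scheme QLoRA actually uses (NF4 quantizes to a normal-distribution codebook; the block size and data below are illustrative assumptions):

```python
import numpy as np

# Blockwise absmax quantization sketch: each block of weights is scaled by
# its absolute maximum and rounded to a signed 4-bit grid [-7, 7]; the
# per-block scale is kept in full precision for dequantization.

def quantize_absmax4(w, block=64):
    w = w.reshape(-1, block)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    q = np.round(w / scale).astype(np.int8)  # integer codes in [-7, 7]
    return q, scale

def dequantize_absmax4(q, scale):
    return (q.astype(np.float32) * scale).reshape(-1)

rng = np.random.default_rng(0)
w = rng.standard_normal(256).astype(np.float32)
q, scale = quantize_absmax4(w)
w_hat = dequantize_absmax4(q, scale)
print("max abs reconstruction error:", np.abs(w - w_hat).max())
```

Storing one full-precision scale per 64-weight block is the memory trade-off: roughly 4 bits per weight plus a small per-block overhead, at the cost of bounded rounding error within each block.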
👉 Check my full notes: AI-Paper-Notes