MengzhongRe/README.md

# Hi there, I'm Meng Yi (孟毅) 👋


## 🚀 About Me

I am a Master's student in Logic at Sun Yat-sen University, exploring Neuro-Symbolic AI: the combination of logical reasoning with deep learning.

- 🔭 Current Focus: LLM Reasoning (CoT/ToT) and Parameter-Efficient Fine-Tuning (QLoRA).
- 🧠 Research Interests: AI Alignment, Logical Consistency, and Counterfactual Reasoning.
- 🛠️ Tech Stack: PyTorch, Hugging Face, Linux (WSL2), Docker.
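As a back-of-the-envelope sketch of the LoRA idea behind the fine-tuning work above (pure Python for illustration, not the actual `peft` API; `lora_forward` and the toy matrices are hypothetical): a frozen weight W is augmented by a trainable low-rank product B·A, so only r·(d_in + d_out) parameters are updated.

```python
def matvec(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * x for w, x in zip(row, v)) for row in m]

def lora_forward(W, A, B, x, alpha=16, r=1):
    """LoRA-adapted linear layer: y = W x + (alpha / r) * B (A x).

    W is frozen; only the low-rank factors A (r x d_in) and
    B (d_out x r) would be trained.
    """
    base = matvec(W, x)
    delta = matvec(B, matvec(A, x))   # low-rank update path
    scale = alpha / r
    return [b + scale * d for b, d in zip(base, delta)]

# Toy example: identity base weight plus a rank-1 update.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[0.5, 0.5]]          # r x d_in, r = 1
B = [[1.0], [0.0]]        # d_out x r
y = lora_forward(W, A, B, [2.0, 4.0], alpha=2, r=1)
print(y)  # [8.0, 4.0]: base output [2.0, 4.0] plus the scaled low-rank delta
```

In a real setup the same shape argument explains why LoRA is cheap: for a 4096x4096 layer, rank 8 trains roughly 65k parameters instead of 16.8M.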

## 🛠️ Tech Stack & Tools


## 📚 Paper Reading & Reproduction Notes

### Phase 1: The Foundation (BERT & GPT)

| Date | Paper Title | Links | Tags |
|------|-------------|-------|------|
| 2025-12-30 | Attention Is All You Need | PDF | Transformer |
| 2025-12-30 | BERT: Pre-training of Deep Bidirectional Transformers | PDF | Encoder |
| 2026-01-08 | Language Models are Few-Shot Learners (GPT-3) | PDF | Decoder |

### Phase 2: Reasoning & Agents

| Date | Paper Title | Links | Tags |
|------|-------------|-------|------|
| 2026-01-09 | Chain-of-Thought Prompting Elicits Reasoning | PDF | CoT |
| 2026-01-10 | Self-Consistency Improves Chain of Thought Reasoning in Language Models | PDF | CoT-SC |
| 2026-01-13 | Tree of Thoughts: Deliberate Problem Solving with Large Language Models | PDF | ToT |
| 2026-01-14 | ReAct: Synergizing Reasoning and Acting in Language Models | PDF | Agent, Tool Calling |

### Phase 3: Fine-Tuning Techniques (In Progress)

| Date | Paper Title | Links | Tags |
|------|-------------|-------|------|
| 2026-01-15 | LoRA: Low-Rank Adaptation of Large Language Models | PDF | LoRA, Fine-Tuning |
| 2026-01-16 | QLoRA: Efficient Finetuning of Quantized LLMs | PDF | QLoRA, Quantization, Fine-Tuning |
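As a rough illustration of the quantization idea behind the QLoRA paper above (real QLoRA uses the NF4 data type via `bitsandbytes`; this blockwise absmax integer scheme and the function names are simplified stand-ins):

```python
def quantize_4bit(xs):
    """Map floats to signed 4-bit integers in [-7, 7] using one absmax scale."""
    scale = max(abs(x) for x in xs) / 7 or 1.0   # guard against all-zero input
    return [round(x / scale) for x in xs], scale

def dequantize_4bit(q, scale):
    """Recover approximate floats from the quantized integers."""
    return [qi * scale for qi in q]

weights = [0.12, -0.8, 0.33, 0.05]
q, scale = quantize_4bit(weights)
restored = dequantize_4bit(q, scale)
print(q)         # [1, -7, 3, 0]: each value fits in 4 bits
print(restored)  # approximate reconstruction of the original weights
```

The key trade-off is visible even in this toy: storage drops from 32 bits to 4 bits per weight, at the cost of rounding error that the LoRA adapters are then trained to compensate for.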

👉 Check my full notes: AI-Paper-Notes


## Pinned Repositories

  1. LLM-Mechanics-From-Scratch (Public)

    From-scratch implementations of the core operators across the full LLM lifecycle (e.g. tokenization, RMSNorm, RoPE, and LoRA), with detailed readings and organized notes on the corresponding papers.

    Python 1

  2. algorithm_and_datastructure (Public)

    A collection of LeetCode algorithm and data-structure solution code, with summaries of solution approaches by problem type.

    Python 1

  3. bert-logic-stress (Public)

    Transformer-based NLP models show strong natural-language understanding and reach very high accuracy on sentiment analysis. This project shows, however, that a Chinese sentiment-analysis model fine-tuned from BERT is error-prone on logically complex constructions such as double negation and irony, even though the true sentiment of such sentences is easy for humans to judge. The results suggest that today's language models still process language through surface probabilities rather than genuine semantic relations.

    Python 3
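The failure mode that bert-logic-stress probes can be sketched as a tiny evaluation harness (illustrative only: `naive_classify` is a hypothetical keyword heuristic standing in for the fine-tuned BERT model, and the stress cases are made-up examples):

```python
STRESS_CASES = [
    # (sentence, gold sentiment): double negation, irony, and a plain control
    ("这部电影不是不好看", "positive"),                  # "not not-good", i.e. good
    ("这服务真是太好了，我只等了两个小时", "negative"),  # ironic praise
    ("这部电影很好看", "positive"),                      # plain positive control
]

def naive_classify(text):
    """Surface-level heuristic: any negation marker flips the label to negative."""
    return "negative" if "不" in text else "positive"

def stress_accuracy(classify, cases):
    """Fraction of logic-stress sentences the classifier labels correctly."""
    hits = sum(classify(text) == gold for text, gold in cases)
    return hits / len(cases)

acc = stress_accuracy(naive_classify, STRESS_CASES)
print(f"accuracy on logic-stress cases: {acc:.2f}")  # 0.33: only the control survives
```

Swapping `naive_classify` for a real model (e.g. a Hugging Face sentiment pipeline) would turn this sketch into the project's actual experiment: a model that relies on surface cues fails the double-negation and irony cases for the same structural reason the keyword heuristic does.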