• Potentials of Multitenancy Fine-Tuned LLM Serving

    As open-source pre-trained Large Language Models (LLMs) become more powerful and permissive, more and more users are incorporating LLMs into their projects. An essential adaptation step is integrating domain-specific documents into the pre-trained model, a process known as fine-tuning.

    Often, the additional knowledge from domain-specific documents is minuscule compared to what the pre-trained model already knows. In such scenarios, the Low-Rank Adaptation (LoRA) technique proves valuable.

    With LoRA, a fine-tuned model adds fewer than 0.1% of the pre-trained model’s parameters. In concrete terms, a LoRA fine-tuned model increases storage by only 10~200 MB, depending on the configuration. And because the parameter increase is so marginal, the additional computational load is relatively small as well.
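    To make the storage claim concrete, here is a back-of-the-envelope sketch in Python. The hidden size, layer count, rank, and choice of adapted projections below are all hypothetical (roughly a 7B-class transformer with rank-16 adapters on the attention projections), and the result shifts with each of them:

        # Back-of-the-envelope LoRA adapter size; all numbers are hypothetical.
        d_model = 4096   # hidden size
        n_layers = 32    # transformer layers
        rank = 16        # LoRA rank

        # Each adapted weight gets a low-rank pair: A (rank x d_model) and
        # B (d_model x rank), i.e. 2 * d_model * rank extra parameters.
        params_per_matrix = 2 * d_model * rank
        # Adapt the four attention projections (q, k, v, o) in every layer.
        lora_params = n_layers * 4 * params_per_matrix

        print(f"LoRA parameters: {lora_params / 1e6:.1f}M")       # ~16.8M
        print(f"fp16 storage:    {lora_params * 2 / 1e6:.1f} MB")  # ~33.6 MB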

    Considering the minimal storage addition and computational overhead, I believe there’s potential in developing a multitenancy fine-tuned LLM serving service. This service could host thousands of LoRA models, all sharing the same backbone LLM. With batching, each user request would invoke a distinct fine-tuned model, thereby amortizing storage and computational costs across various models.
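    As a rough illustration of the idea (a sketch, not a real serving implementation), the PyTorch snippet below batches requests that target different hypothetical per-tenant adapters: the expensive backbone matmul is computed once for the whole batch, and each request only pays for its own low-rank correction. All shapes and names are made up for the demo:

        import torch

        # Toy shapes for the demo; real models are much larger.
        d, rank, n_tenants, batch = 1024, 8, 100, 4

        W = torch.randn(d, d)                        # shared backbone weight
        A = torch.randn(n_tenants, rank, d) * 0.01   # per-tenant LoRA A matrices
        B = torch.randn(n_tenants, d, rank) * 0.01   # per-tenant LoRA B matrices

        x = torch.randn(batch, d)                       # one activation per request
        tenant = torch.randint(0, n_tenants, (batch,))  # adapter chosen per request

        base = x @ W.T  # backbone compute, shared across all tenants
        # Per-request low-rank correction: delta_i = B[t_i] @ (A[t_i] @ x_i).
        delta = torch.einsum('bdr,brk,bk->bd', B[tenant], A[tenant], x)
        y = base + delta  # each request gets its own fine-tuned output

    A real server would batch at the token level and fuse the gathered adapter matmuls, but the cost structure stays the same: one shared backbone pass plus a small per-tenant correction.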

    In my previous blog post, I delved into the batching effects in LLM serving. In this post, I’ll detail why multitenancy LoRA serving has immense potential.

    Read on →

  • Dissecting Batching Effects in GPT Inference

    Machine learning models rely on batching to improve inference throughput, especially smaller computer vision models such as ResNet and DenseNet. GPT and other large language models (LLMs) are the hottest models these days. Does batching still apply to GPT and LLMs? Let’s find out.
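    As a taste of the kind of measurement the post walks through, here is a minimal, hypothetical benchmark sketch: it sweeps the batch size for a single linear layer and prints throughput. The layer is a stand-in, and the absolute numbers depend entirely on your hardware:

        import time
        import torch

        model = torch.nn.Linear(4096, 4096)  # stand-in "model"
        model.eval()

        for batch in (1, 4, 16, 64):
            x = torch.randn(batch, 4096)
            with torch.no_grad():
                model(x)  # warm-up
                start = time.perf_counter()
                for _ in range(50):
                    model(x)
                elapsed = time.perf_counter() - start
            print(f"batch={batch:3d}: {50 * batch / elapsed:10.0f} samples/s")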

    Read on →

  • How we discovered why C++ exceptions disappear in stack trace

    Every time my code crashes, I rely on core dump files to find out where the crash happened. (See my previous post on how to produce and use a core dump file.) My debugging life has been happy thanks to this approach, until this time: when I loaded the core dump into GDB, I was disappointed to find that the entire stack trace was in system libraries, with none of my own code in it.

    TLDR: Check this patch.

    Let’s start the journey.

    Read on →

  • Coq Tricks for Beginners with Too Many Examples

    The first time I heard of Coq was in college. Since then I had always wanted to learn Coq but didn’t know how. Lots of the tutorials are about logic, which didn’t seem interesting to me.

    I took UW CSE505 last quarter. Prof. Zach Tatlock and the TA Talia Ringer made some wonderful homework assignments to help us learn Coq. In this course, we proved more practical things, like a programming language interpreter and a regular expression matcher. I actually found myself feeling quite good about proving things in Coq (writing specifications is a completely different matter), at least when doing the homework.

    If you are teaching yourself Coq, I’d suggest the following resources:

    • Homework of UW CSE505
    • Formal Reasoning About Programs is the textbook we used in class. It’s a short textbook with big margins and lots of whitespace. It comes with detailed Coq code for each chapter and a very powerful Coq library, frap.
    • Software Foundations is the material for you if you are really into verification.

    I’m quite happy to check Coq off my wish list, though I’m not a huge believer in machine-checked proofs or verification. Before I completely forget Coq, let me write down some beginner-level tricks I learned while doing the homework, in case you find them useful or I come back to Coq some decades later.

    Read on →

  • Damn Thread, Don't be such a dog in the manger

    Take a close look at the following code:

    Read on →