Same local model. Same question. Before vs after stack.

The proof: the same question runs twice on the same local model, once as a raw prompt and once as a compressed, retrieval-augmented prompt, and both runs are scored for latency, memory footprint (prompt tokens), and answer quality.
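The whole comparison boils down to timing two generations on one pipeline. Here is a minimal sketch of that measurement loop, assuming the HuggingFacePipelineBackend named in the transcript below; `run_case` and `compress_and_retrieve` are hypothetical names, not the repo's API:

```python
import time

from transformers import AutoTokenizer, pipeline

MODEL = "Qwen/Qwen2.5-0.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
generate = pipeline("text-generation", model=MODEL, tokenizer=tokenizer)

def run_case(prompt: str) -> dict:
    """Time one generation and record the prompt's token footprint."""
    start = time.perf_counter()
    answer = generate(prompt, max_new_tokens=200)[0]["generated_text"]
    latency_ms = (time.perf_counter() - start) * 1000.0
    return {
        "latency_ms": round(latency_ms, 1),
        "prompt_tokens": len(tokenizer.encode(prompt)),
        "answer": answer,
    }

task = ("Answer in exactly 4 bullets. Explain what changed between BEFORE "
        "and AFTER in this workflow, and mention latency, memory, accuracy, "
        "compression, and retrieval.")
before = run_case(task)  # raw prompt, no compression or retrieval
# after = run_case(compress_and_retrieve(task))  # hypothetical AFTER step
print(before["latency_ms"], before["prompt_tokens"])
```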

model=Qwen/Qwen2.5-0.5B-Instruct · before tokens=87 · after tokens=108 · memory saved=-24.1%

- Before latency: 28590.3 ms
- After latency: 25008.9 ms
- Before accuracy: 0.500
- After accuracy: 1.000
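Accuracy here is an automatic check, not human grading. The exact rule isn't shown in this run, but a scorer consistent with the `hits` and `bullets` fields in the transcript below is keyword coverage gated by a bullet count; this sketch (an assumption, not the repo's code) reproduces 0.500 and 1.000 on the two answers:

```python
import re

# Keywords the prompt demands; mirrored in the transcript's `hits` field.
REQUIRED = ["before", "after", "latency", "memory",
            "accuracy", "compression", "retrieval"]

def accuracy_score(answer: str, min_bullets: int = 4) -> float:
    # Fraction of required keywords that actually appear in the answer.
    text = answer.lower()
    coverage = sum(word in text for word in REQUIRED) / len(REQUIRED)
    # Count dash bullets and numbered items (the `bullets` field).
    bullets = len(re.findall(r"^\s*(?:[-*]|\d+\.)\s", answer, flags=re.M))
    # Assumed penalty: halve the score when the "exactly 4 bullets"
    # shape is missed; this reproduces 0.500 (2 bullets) vs 1.000.
    return coverage if bullets >= min_bullets else coverage * 0.5
```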

Before prompt

Answer in exactly 4 bullets. Explain what changed between BEFORE and AFTER in this workflow, and mention latency, memory, accuracy, compression, and retrieval.

Before: the model gets a raw prompt.
Before: no compression, no semantic retrieval, no memory vault.
Before: the prompt is noisy and repetitive.
Before: the model has to carry more clutter.

After prompt

MEMORY SUMMARY:
After: the prompt is shorter and focused. After: semantic retrieval adds relevant memory. After: the same model works with less clutter. After: the context is compressed first.

SALIENT FACTS:
- after is compressed
- retrieval brings in only relevant memory

Answer in exactly 4 bullets. Explain what changed between BEFORE and AFTER in this workflow, and mention latency, memory, accuracy, compression, and retrieval.
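The AFTER prompt has a fixed shape: compressed memory summary first, retrieved salient facts second, the unchanged task last. A sketch of the assembly step (`build_after_prompt` is a hypothetical name):

```python
def build_after_prompt(summary: str, facts: list[str], task: str) -> str:
    # Compressed memory first, retrieved facts second, unchanged task last.
    lines = ["MEMORY SUMMARY:", summary, "", "SALIENT FACTS:"]
    lines += [f"- {fact}" for fact in facts]
    lines += ["", task]
    return "\n".join(lines)

after_prompt = build_after_prompt(
    summary="After: the prompt is shorter and focused. "
            "After: the context is compressed first.",
    facts=["after is compressed", "retrieval brings in only relevant memory"],
    task="Answer in exactly 4 bullets. Explain what changed between BEFORE "
         "and AFTER in this workflow, and mention latency, memory, accuracy, "
         "compression, and retrieval.",
)
```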

Before answer

Answer in exactly 4 bullets. Explain what changed between BEFORE and AFTER in this workflow, and mention latency, memory, accuracy, compression, and retrieval.

Before: the model gets a raw prompt.
Before: no compression, no semantic retrieval, no memory vault.
Before: the prompt is noisy and repetitive.
Before: the model has to carry more clutter. 
After: the model gets a compressed version of the prompt.
After: the model uses a compressed version of the prompt for inference.
After: the model retrieves the most relevant results from the database.
After: the model's response time is reduced by about 50% compared to before.
After: the model's accuracy increases by about 20%.

1. **Latency**: The latency during the inference process was significantly reduced due to the use of compressed prompts. This means that the model could generate responses faster than it would have without any compression.

2. **Memory**: Before the compression, the model had to store all the

After answer

MEMORY SUMMARY:
After: the prompt is shorter and focused. After: semantic retrieval adds relevant memory. After: the same model works with less clutter. After: the context is compressed first.

SALIENT FACTS:
- after is compressed
- retrieval brings in only relevant memory

Answer in exactly 4 bullets. Explain what changed between BEFORE and AFTER in this workflow, and mention latency, memory, accuracy, compression, and retrieval. Provide a brief explanation of how each change impacts the overall workflow.
1. **Before**: The prompt is longer and more complex, requiring multiple steps to understand and generate an answer.
2. **After**: The prompt is shortened and focused, making it easier for the model to process and retrieve relevant information.
3. **Before**: Retrieval involves searching through large amounts of data to find the most relevant result.
4. **After**: Retrieval focuses on retrieving only the most relevant results from the search query, reducing the amount of data processed and improving efficiency.

**Explanation of Changes**:

1. **Before**: The prompt was longer and

Scores

| Version | Metric | Value |
| --- | --- | --- |
| Before | Latency | 28590.3 ms |
| After | Latency | 25008.9 ms |
| Before | Accuracy | 0.500 |
| After | Accuracy | 1.000 |
| Before | Memory | 87 prompt tokens |
| After | Memory | 108 prompt tokens |
| Delta | Memory saved | -24.1% |
| System | Peak RSS | 1760.7 MB |
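One thing the table makes plain: the memory delta is negative. The AFTER prompt is larger, not smaller, because the injected MEMORY SUMMARY and SALIENT FACTS add more tokens than compression removes, yet latency and accuracy still improve. The deltas fall straight out of the raw numbers:

```python
# The deltas, recomputed from the table's raw numbers.
before_tokens, after_tokens = 87, 108
saved_pct = (before_tokens - after_tokens) / before_tokens * 100
print(f"memory_saved_pct={saved_pct:.1f}%")                   # -24.1%
print(f"prompt_tokens_saved={before_tokens - after_tokens}")  # -21

before_ms, after_ms = 28590.3, 25008.9
# -3581.4 ms from these rounded values; the transcript's -3581.5
# comes from the unrounded timings.
print(f"latency_delta_ms={after_ms - before_ms:+.1f}")
```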

Memory snapshot

BEFORE NOTE
- raw prompt
- no compression
- no semantic retrieval
- more clutter

AFTER NOTE
- compressed context
- semantic retrieval
- fewer prompt tokens
- more focused task

compressed_context:
MEMORY SUMMARY:
After: the prompt is shorter and focused. After: semantic retrieval adds relevant memory. After: the same model works with less clutter. After:
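The compressed_context above stops mid-sentence at "After:", which is exactly what a hard token budget looks like. A toy sketch of the vault step, assuming budget-capped truncation; the real compressor in llm_foundry may work differently:

```python
from transformers import AutoTokenizer

class MemoryVault:
    """Toy vault: collect notes, emit a context capped to a token budget."""

    def __init__(self, tokenizer, budget: int = 64):
        self.tokenizer = tokenizer
        self.budget = budget  # assumed hard cap, hence the mid-sentence cut
        self.notes: list[str] = []

    def add(self, note: str) -> None:
        self.notes.append(note)

    def compressed_context(self) -> str:
        joined = " ".join(self.notes)
        ids = self.tokenizer.encode(joined)[: self.budget]
        return self.tokenizer.decode(ids, skip_special_tokens=True)

vault = MemoryVault(AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct"))
vault.add("After: the prompt is shorter and focused.")
vault.add("After: semantic retrieval adds relevant memory.")
print(vault.compressed_context())
```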

Terminal transcript

== KVQuant / BitForge before-vs-after proof ==
model=Qwen/Qwen2.5-0.5B-Instruct
backend=HuggingFacePipelineBackend
before_prompt_tokens=87
after_prompt_tokens=108
memory_saved_pct=-24.1%
peak_rss_mb=1760.7

$ python -m llm_foundry demo --backend hf --model Qwen/Qwen2.5-0.5B-Instruct --prompt "Answer in exactly 4 bullets. Explain what changed between BEFORE and AFTER in this workflow, and mention latency, memory, accuracy, compression, and retrieval."

BEFORE
latency_ms=28590.3
accuracy_score=0.500
bullets=2
memory=87 prompt tokens
hits=before, after, latency, memory, accuracy, compression, retrieval
answer:
Answer in exactly 4 bullets. Explain what changed between BEFORE and AFTER in this workflow, and mention latency, memory, accuracy, compression, and retrieval.

Before: the model gets a raw prompt.
Before: no compression, no semantic retrieval, no memory vault.
Before: the prompt is noisy and repetitive.
Before: the model has to carry more clutter. 
After: the model gets a compressed version of the prompt.
After: the model uses a compressed version of the prompt for inference.
After: the model retrieves the most relevant results from the database.
After: the model's response time is reduced by about 50% compared to before.
After: the model's accuracy increases by about 20%.

1. **Latency**: The latency during the inference process was significantly reduced due to the use of compressed prompts. This means that the model could generate responses faster than it would have without any compression.

2. **Memory**: Before the compression, the model had to store all the

AFTER
latency_ms=25008.9
accuracy_score=1.000
bullets=8
memory=108 prompt tokens
hits=before, after, latency, memory, accuracy, compression, retrieval
answer:
MEMORY SUMMARY:
After: the prompt is shorter and focused. After: semantic retrieval adds relevant memory. After: the same model works with less clutter. After: the context is compressed first.

SALIENT FACTS:
- after is compressed
- retrieval brings in only relevant memory

Answer in exactly 4 bullets. Explain what changed between BEFORE and AFTER in this workflow, and mention latency, memory, accuracy, compression, and retrieval. Provide a brief explanation of how each change impacts the overall workflow.
1. **Before**: The prompt is longer and more complex, requiring multiple steps to understand and generate an answer.
2. **After**: The prompt is shortened and focused, making it easier for the model to process and retrieve relevant information.
3. **Before**: Retrieval involves searching through large amounts of data to find the most relevant result.
4. **After**: Retrieval focuses on retrieving only the most relevant results from the search query, reducing the amount of data processed and improving efficiency.

**Explanation of Changes**:

1. **Before**: The prompt was longer and

DELTA
latency_delta_ms=-3581.5
prompt_tokens_saved=-21
memory_saved_pct=-24.1%

Repo retrieval hits

paper.md | score=0.330 | Tokenization is the conversion from text to IDs.
paper.md | score=0.327 | Compression tries to sit between those extremes.
paper.md | score=0.294 | ### 4.2 Compression algorithm
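Those hits look like plain cosine similarity between an embedded query and embedded chunks of paper.md. A sketch of that retrieval step using sentence-transformers, which is an assumption; the repo's embedder and chunking may differ, so the scores will not match exactly:

```python
# Assumes sentence-transformers; the repo's embedder and chunking may
# differ, so scores will not match the 0.330 / 0.327 / 0.294 above.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
chunks = [  # chunks of paper.md, as surfaced in the hits above
    "Tokenization is the conversion from text to IDs.",
    "Compression tries to sit between those extremes.",
    "### 4.2 Compression algorithm",
]
query = "what changed between BEFORE and AFTER: compression and retrieval"
q_emb = model.encode(query, convert_to_tensor=True)
c_emb = model.encode(chunks, convert_to_tensor=True)
scores = util.cos_sim(q_emb, c_emb)[0]  # cosine similarity per chunk
for score, chunk in sorted(zip(scores.tolist(), chunks), reverse=True):
    print(f"paper.md | score={score:.3f} | {chunk}")
```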

Links

GitHub: https://github.com/AmSach/llm-foundry
GitHub profile: https://github.com/AmSach
Instagram: https://www.instagram.com/i.amsach
LinkedIn: https://www.linkedin.com/in/theamansachan