GenAI — Build LLM Streaming in Angular UI with FastAPI Backend

VerticalServe Blogs
Jul 24, 2024


In this blog post, we will explore how to implement Server-Sent Events (SSE) streaming with Angular 16 on the frontend and Python FastAPI on the backend, integrate LangChain LLM streams, and test the implementation using Postman.

Setting Up the Backend with FastAPI

First, let’s set up a FastAPI server to handle SSE streams. We’ll use FastAPI’s StreamingResponse to emit the event stream.

Install Dependencies:

pip install fastapi uvicorn openai
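Before wiring in the LLM, here is a minimal sketch of what an SSE endpoint looks like with StreamingResponse. The file name, route, and payload are made up for illustration; it simply streams a counter, with each message framed as a data: line followed by a blank line, as the SSE protocol requires.

# minimal_sse.py: a bare-bones SSE endpoint that streams a counter
import asyncio

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

async def counter_events():
    for i in range(5):
        # SSE framing: a "data: <text>" line followed by a blank line
        yield f"data: tick {i}\n\n"
        await asyncio.sleep(1)

@app.get("/ticks")
async def ticks():
    return StreamingResponse(counter_events(), media_type="text/event-stream")

Running uvicorn minimal_sse:app --reload and opening http://localhost:8000/ticks in a browser shows the messages arriving one per second.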

Integrating LangChain LLM Streams

Assuming you have LangChain set up to interact with an LLM, we can incorporate it into our FastAPI application to stream responses. The example below streams directly from the OpenAI API; a LangChain-based version is shown later in the post.

Create the FastAPI Application:

# main.py
import os

import openai
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

openai.api_key = os.getenv("OPENAI_API_KEY")

app = FastAPI()

def generate_response(query: str):
    # Stream the completion from OpenAI and yield each token as it arrives
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": query}],
        temperature=0.0,
        stream=True,
    )
    for line in response:
        # The final chunk carries no content in its delta, so fall back to an empty string
        yield line["choices"][0]["delta"].get("content", "")

@app.post("/query")
async def query_handler(query: str):
    return StreamingResponse(generate_response(query), media_type="text/event-stream")
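One practical note before moving on to the frontend: the Angular dev server (usually http://localhost:4200) runs on a different origin than FastAPI, so the browser will block the streaming requests unless CORS is enabled on the backend. A minimal sketch, assuming the default Angular dev server URL:

# main.py (continued): allow the Angular dev server origin to call the API
from fastapi.middleware.cors import CORSMiddleware

app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:4200"],  # assumed Angular dev server URL
    allow_methods=["*"],
    allow_headers=["*"],
)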

Setting Up the Frontend with Angular 16

Next, we will set up an Angular 16 application to consume the SSE stream.

Install Angular CLI and Create a New Project:

npm install -g @angular/cli
ng new sse-angular
cd sse-angular

Create a Service to Handle SSE (Using Simple EventSource):

// src/app/sse.service.ts
import { Injectable } from '@angular/core';

@Injectable({
  providedIn: 'root'
})
export class SseService {
  constructor() { }

  getServerSentEvent(url: string): EventSource {
    return new EventSource(url);
  }
}

Use the Service in a Component:

// src/app/app.component.ts
import { Component, OnInit } from '@angular/core';
import { SseService } from './sse.service';

@Component({
  selector: 'app-root',
  template: `<div *ngFor="let message of messages">{{ message }}</div>`,
  styleUrls: ['./app.component.css']
})
export class AppComponent implements OnInit {
  messages: string[] = [];

  constructor(private sseService: SseService) {}

  ngOnInit(): void {
    const eventSource = this.sseService.getServerSentEvent('http://localhost:8000/stream');
    eventSource.onmessage = (event) => {
      this.messages.push(event.data);
    };
  }
}
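Note that the component above connects to a GET /stream URL, while the backend defined earlier only exposes POST /query. The native EventSource API also expects each message to be framed as a data: line followed by a blank line, or onmessage will never fire. A hypothetical GET endpoint to pair with this component, reusing generate_response from main.py, could look like this:

# Hypothetical GET endpoint to pair with the plain EventSource example above.
# Each token is wrapped in SSE framing so EventSource's onmessage fires per chunk.
@app.get("/stream")
async def stream_handler(query: str = "Tell me a short joke"):
    def event_stream():
        for token in generate_response(query):
            yield f"data: {token}\n\n"
    return StreamingResponse(event_stream(), media_type="text/event-stream")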

Using sse.js — For POST Requests

We want to send a payload to the backend service using POST, but the native EventSource API only supports GET requests. The sse.js package lets us open the SSE connection over a POST request instead.

npm i sse.js --legacy-peer-deps

Backend Code

@router.post("/stream")
async def stream_endpoint(request_data: query):
# Process the request and stream the response
model = get_model(request_data.model).get_llm()

async def generate_response():
async for chunk in model.astream(query):
print(chunk.content)
yield f"event: chunk\ndata: {chunk.content}\n\n"
return StreamingResponse(generate_response(), media_type='text/event-stream')
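The snippet above assumes a few pieces defined elsewhere in the project: an APIRouter mounted under /endpoints, a Pydantic request model (referred to as Query here), and the project's own get_model helper that returns a LangChain chat model. A rough sketch of that scaffolding, with assumed names and fields, might look like:

from fastapi import APIRouter
from fastapi.responses import StreamingResponse
from pydantic import BaseModel

# Assumed prefix so the full path matches the Angular URL below
router = APIRouter(prefix="/endpoints")

class Query(BaseModel):
    query: str
    model: str = "gpt-3.5-turbo"  # assumed default, since the Angular payload only sends "query"

# get_model(...) is the project's own factory for LangChain chat models and is not shown here.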

Angular Code

import { SSE } from 'sse.js';

runQuery(query: string) {
  const input = { query: query };
  const eventSource = new SSE('http://localhost:8000/endpoints/stream', {
    headers: { 'Content-Type': 'application/json' },
    payload: JSON.stringify(input)
  });
  // Append each streamed "chunk" event to the component's content
  eventSource.addEventListener('chunk', (e: any) => {
    this.content += e.data;
  });
}

Testing with Postman

To test the SSE endpoint using Postman:

  1. Open Postman and create a new request.
  2. Set the request type to GET and enter the URL http://localhost:8000/stream.
  3. Click on Send and observe the streamed messages in the response section.

For the POST endpoints, pass query as a request parameter to /query (FastAPI treats a plain str argument as a query parameter) or send a JSON body such as {"query": "Hello"} to /endpoints/stream, and observe the streamed responses.
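If you prefer testing from a script rather than Postman, a small Python client using requests with stream=True also works. A sketch, assuming the sse.js backend above is running on port 8000:

# quick_test.py: read the SSE stream from the POST endpoint line by line
import requests

resp = requests.post(
    "http://localhost:8000/endpoints/stream",
    json={"query": "Tell me a short joke"},
    stream=True,
)
for line in resp.iter_lines(decode_unicode=True):
    if line:  # skip the blank lines that separate SSE events
        print(line)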

Conclusion

In this post, we have demonstrated how to set up SSE streaming using Angular 16 and Python FastAPI, including integrating LangChain LLM streams and testing with Postman. This setup allows for real-time communication between the server and client, making it ideal for applications requiring live updates.

About — The GenAI POD — GenAI Experts

GenAIPOD is a specialized consulting team at VerticalServe that helps clients with GenAI architecture and implementations.

VerticalServe Inc is a premier niche cloud, data, and AI/ML consulting company, partnered with Google Cloud, Confluent, AWS, and Azure, with 50+ customers and many success stories.

Website: http://www.VerticalServe.com

Contact: contact@verticalserve.com
