How I Built AskAI (askai.vijoypaul.com): My AI Chatbot with React, Node.js & Netlify 🚀

🚀 Introduction
I've always been fascinated by conversational AI and how accessible it has become with modern APIs. To push my skills further, I decided to build my own AI chatbot, AskAI: a simple yet functional chatbot application that I could deploy and share publicly.
The main goals of this project were:
To learn how to integrate AI APIs into a full-stack app.
To practice building scalable frontend-backend communication.
To explore serverless deployment using Netlify Functions.
In this article, I'll walk you through my journey: from setting up the React frontend and Node.js backend to configuring CI/CD on Netlify, handling environment variables securely, and overcoming deployment challenges.
🔗 GitHub Repos:
⚙️ Tech Stack
I wanted to keep things beginner-friendly and use tools I'm comfortable with:
Frontend: React + Vite (fast build, clean setup).
Backend: Node.js + Express (for REST API + serverless function handler).
Styling: Basic CSS + responsive UI.
Deployment: Netlify (for both frontend hosting and backend functions).
AI Integration: External LLM API (with plans to integrate my own LLM model later).
🎨 Building the Frontend (React + Vite)
The frontend is where users interact with the chatbot. I used Vite for its speed and developer experience.
Here's a simplified React component for sending messages to the backend:
import React, { useState, useRef, useEffect } from "react";
import "../styles/Chatbot.css";
import "../../public/animate.min.css";
import ReactMarkdown from "react-markdown";
import ThemeToggle from "./ThemeToggle";

// In production the frontend calls its own Netlify proxy function;
// in development it calls the backend dev server directly.
const API_URL =
  import.meta.env.MODE === "production"
    ? "/.netlify/functions/proxy-chat"
    : import.meta.env.VITE_API_URL + "/.netlify/functions/chat";
export default function Chatbot({ theme, setTheme }) {
  const [messages, setMessages] = useState([]);
  const [typingIdx, setTypingIdx] = useState(null); // index of the message being "typed"
  const [typingText, setTypingText] = useState("");
  const [editIdx, setEditIdx] = useState(null);
  const [editValue, setEditValue] = useState("");
  const [input, setInput] = useState("");
  const [inputError, setInputError] = useState("");
  const [loading, setLoading] = useState(false);
  const [rateLimit, setRateLimit] = useState(0); // seconds left
  const chatEndRef = useRef(null);

  // Typing effect for the initial greeting
  useEffect(() => {
    if (messages.length === 0) {
      const greeting = "Hi! How can I help you today?";
      setTypingIdx(0);
      setTypingText("");
      let i = 0;
      function typeChar() {
        setTypingText(greeting.slice(0, i));
        if (i < greeting.length) {
          i++;
          setTimeout(typeChar, 12 + Math.random() * 30);
        } else {
          setMessages([{ sender: "bot", text: greeting, animate: true }]);
          setTypingIdx(null);
          setTypingText("");
        }
      }
      typeChar();
    }
  }, []);

  // Keep the newest message scrolled into view
  useEffect(() => {
    chatEndRef.current?.scrollIntoView({ behavior: "smooth" });
  }, [messages]);

  // Count down the client-side rate-limit timer
  useEffect(() => {
    if (rateLimit > 0) {
      const timer = setInterval(() => setRateLimit((s) => (s > 0 ? s - 1 : 0)), 1000);
      return () => clearInterval(timer);
    }
  }, [rateLimit]);
  const sendMessage = async (e) => {
    e.preventDefault();
    if (!input.trim()) return;
    if (input.length > 1000) {
      setInputError("Message too long (max 1000 characters).");
      return;
    }
    setInputError("");
    const userMessage = input;
    const newMessages = [...messages, { sender: "user", text: userMessage, animate: true }];
    setMessages(newMessages);
    setInput("");
    setLoading(true);
    try {
      // Prepare messages in OpenAI chat format
      const formattedMessages = newMessages.map((msg) => ({
        role: msg.sender === "user" ? "user" : "assistant",
        content: msg.text,
      }));
      // Note: Access-Control-Allow-* are *response* headers set by the server,
      // so the request only needs a Content-Type header here.
      const response = await fetch(API_URL, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ messages: formattedMessages }),
      });
      if (response.status === 429) {
        // Typing effect for the rate-limit message
        const rateMsg = "Too many requests. Please wait 15 seconds before sending another message.";
        setTypingIdx(messages.length + 1);
        setTypingText("");
        let i = 0;
        function typeChar() {
          setTypingText(rateMsg.slice(0, i));
          if (i < rateMsg.length) {
            i++;
            setTimeout(typeChar, 12 + Math.random() * 30);
          } else {
            setMessages((msgs) => [...msgs, { sender: "bot", text: rateMsg, animate: true }]);
            setTypingIdx(null);
            setTypingText("");
          }
        }
        typeChar();
        setRateLimit(15);
        return;
      }
      if (!response.ok) {
        setMessages((msgs) => [
          ...msgs,
          { sender: "bot", text: `Server error (${response.status}). Please try again later.`, animate: true },
        ]);
        return;
      }
      const data = await response.json();
      const botText = data.choices?.[0]?.message?.content || "Sorry, I didn't get that.";
      setTypingIdx(messages.length + 1); // index of the new bot message
      setTypingText("");
      let i = 0;
      function typeChar() {
        setTypingText(botText.slice(0, i));
        if (i < botText.length) {
          i++;
          setTimeout(typeChar, 12 + Math.random() * 30);
        } else {
          // Prevent a duplicate bot message if this runs twice
          setMessages((msgs) => {
            if (msgs[msgs.length - 1]?.text === botText && msgs[msgs.length - 1]?.sender === "bot") return msgs;
            return [...msgs, { sender: "bot", text: botText, animate: true }];
          });
          setTypingIdx(null);
          setTypingText("");
        }
      }
      typeChar();
    } catch (err) {
      setMessages((msgs) => [
        ...msgs,
        { sender: "bot", text: "Network error. Please check your connection and try again.", animate: true },
      ]);
    } finally {
      setLoading(false);
    }
  };
  return (
    <div className="chatbot-container">
      <header className="chat-header">
        <span className="chat-title">Chat</span>
        <span className="header-spacer" />
        <ThemeToggle theme={theme} setTheme={setTheme} />
      </header>
      <div className="chat-window">
        {messages.map((msg, idx) => (
          <div
            key={idx}
            className={`chat-message ${msg.sender} animate__animated ${msg.animate ? (msg.sender === 'user' ? 'animate__fadeInRight' : '') : ''}`}
            onAnimationEnd={e => e.currentTarget.classList.remove('animate__fadeInRight')}
            style={{ position: 'relative' }}
          >
            {msg.sender === "bot" ? (
              <>
                {typeof msg.text === 'string' ? (
                  <ReactMarkdown>{msg.text}</ReactMarkdown>
                ) : typeof msg.text === 'object' && msg.text !== null ? (
                  // Arrays and objects alike are pretty-printed as JSON
                  <pre style={{ whiteSpace: 'pre-wrap', wordBreak: 'break-word' }}>{JSON.stringify(msg.text, null, 2)}</pre>
                ) : (
                  String(msg.text)
                )}
                {!(idx === 0 && msg.text === "Hi! How can I help you today?") && (
                  <button
                    style={{ position: 'absolute', top: 8, right: 8, background: 'none', border: 'none', color: '#007aff', cursor: 'pointer', padding: 0, display: 'flex', alignItems: 'center' }}
                    title="Copy response"
                    onClick={() => navigator.clipboard.writeText(typeof msg.text === 'string' ? msg.text : JSON.stringify(msg.text, null, 2))}
                  >
                    <svg width="20" height="20" viewBox="0 0 20 20" fill="none" xmlns="http://www.w3.org/2000/svg">
                      <rect x="6" y="6" width="9" height="9" rx="2" stroke="#007aff" strokeWidth="1.5" fill="white" />
                      <rect x="3" y="3" width="9" height="9" rx="2" stroke="#007aff" strokeWidth="1.5" fill="white" />
                    </svg>
                  </button>
                )}
              </>
            ) : editIdx === idx ? (
              <>
                <input
                  type="text"
                  value={editValue}
                  maxLength={1000}
                  onChange={e => setEditValue(e.target.value)}
                  style={{ width: '80%' }}
                />
                <button
                  style={{ marginLeft: 8, color: '#007aff', background: 'none', border: 'none', cursor: 'pointer' }}
                  onClick={() => {
                    if (editValue.trim() && editValue.length <= 1000) {
                      const newMsgs = [...messages];
                      newMsgs[idx].text = editValue;
                      setMessages(newMsgs);
                      setEditIdx(null);
                    }
                  }}
                >Save</button>
                <button
                  style={{ marginLeft: 4, color: '#d32f2f', background: 'none', border: 'none', cursor: 'pointer' }}
                  onClick={() => setEditIdx(null)}
                >Cancel</button>
              </>
            ) : (
              <>{msg.text}</>
            )}
          </div>
        ))}
        {/* Typing animation for the bot */}
        {typingIdx !== null && (
          <div className="chat-message bot animate__animated" style={{ position: 'relative' }}>
            <ReactMarkdown>{typingText + (typingText.length < 1 ? '' : '|')}</ReactMarkdown>
          </div>
        )}
        <div ref={chatEndRef} />
      </div>
      <form className={"chat-input-row animate__animated animate__fadeInUp" + (window.innerWidth <= 480 ? " mobile-input-row" : "")} onSubmit={sendMessage}>
        <input
          type="text"
          value={input}
          maxLength={1000}
          onChange={(e) => {
            setInput(e.target.value);
            if (e.target.value.length > 1000) {
              setInputError("Message too long (max 1000 characters).");
            } else {
              setInputError("");
            }
          }}
          placeholder={rateLimit > 0 ? `Please wait ${rateLimit}s...` : typingIdx !== null ? "Bot is typing..." : "Type your message..."}
          disabled={loading || rateLimit > 0 || typingIdx !== null}
          className="chat-input"
        />
        {inputError && (
          <div style={{ color: '#d32f2f', fontSize: '0.9em', marginTop: 4 }}>{inputError}</div>
        )}
        <button type="submit" disabled={loading || !input.trim() || rateLimit > 0 || typingIdx !== null} className="send-btn">
          {loading ? <span className="loader"></span> : rateLimit > 0 ? `${rateLimit}s` : typingIdx !== null ? <span className="loader"></span> : "Send"}
        </button>
      </form>
      <footer className="chat-footer">This chatbot is designed for educational purposes only and is not intended for commercial or high-volume use.</footer>
    </div>
  );
}
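One detail worth calling out: `VITE_API_URL` above is not hardcoded. Vite only exposes variables prefixed with `VITE_` to browser code, so the development API base lives in an env file. A minimal sketch (the file name follows Vite's convention; the URL is a placeholder for a local `netlify dev` server):

```bash
# .env.development — read by Vite in dev mode; keep real secrets out of
# this file, since VITE_-prefixed values end up in the browser bundle.
VITE_API_URL=http://localhost:8888
```

The real API key never appears here: it lives only in the backend's environment, which is exactly why the frontend talks to a proxy instead of the LLM API directly.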
⚡ Setting Up the Backend (Node.js + Express)
The backend acts as a middle layer between the frontend and the AI API: it rate-limits requests, injects a system prompt, and falls back through a list of models until one responds.
const express = require('express');
const serverless = require('serverless-http');
const cors = require('cors');
const fetch = require('node-fetch');
const dotenv = require('dotenv');
dotenv.config();

const app = express();
const rateLimitWindow = 15 * 1000; // 15 seconds
const ipTimestamps = new Map(); // last request time per client IP

app.use(cors());
app.use(express.json());

app.post('*', async (req, res) => {
  // Behind Netlify, the client IP arrives via x-forwarded-for
  const ip = req.headers['x-forwarded-for'] || req.socket.remoteAddress;
  const now = Date.now();
  const last = ipTimestamps.get(ip) || 0;
  if (now - last < rateLimitWindow) {
    return res.status(429).json({ error: 'Rate limit: Only 1 request per 15 seconds allowed. Please wait before sending another message.' });
  }
  ipTimestamps.set(ip, now);
  try {
    let { messages } = req.body;
    if (!messages || !Array.isArray(messages)) {
      return res.status(400).json({ error: 'Invalid request format.' });
    }
    const lastMsg = messages[messages.length - 1];
    if (lastMsg && lastMsg.role === 'user' && lastMsg.content) {
      const content = lastMsg.content.toLowerCase();
      // Intercept identity/provider questions instead of forwarding them to the model
      const identityRegex = /\b(who (are|made|created|built) (you|this)|what(('|’)?s| is) your (name|origin|model|provider|architecture|purpose|identity)|where (are|do) (you|this) (come|from)|are you (openai|chatgpt|gpt|llama|anthropic|claude|google|gemini|mistral|ai21|cohere|qwen|zhipu|zero|perplexity|llm|an ai|a language model|an llm|an artificial intelligence|an assistant)|who is your (creator|provider|developer|author|team|company|organization|maker|parent)|what model (are|is) (this|you)|what ai (are|is) (this|you)|what is your (training|source|dataset|release|version|type|platform|framework|engine|backend|api)|who trained you|who owns you|who operates you|who maintains you|who supports you|who funds you|who designed you|who controls you|who manages you|who supervises you|who is responsible for you|who invented you|who released you|who published you|who launched you|who built this|who is behind you|who is behind this|what company are you|what company is this|what company built you|what company made you|what company created you|what company owns you|what company operates you|what company maintains you|what company supports you|what company funds you|what company designed you|what company controls you|what company manages you|what company supervises you|what company is responsible for you|what company invented you|what company released you|what company published you|what company launched you|what organization are you|what organization is this|what organization built you|what organization made you|what organization created you|what organization owns you|what organization operates you|what organization maintains you|what organization supports you|what organization funds you|what organization designed you|what organization controls you|what organization manages you|what organization supervises you|what organization is responsible for you|what organization invented you|what organization released you|what organization published you|what organization launched you|what is your company|what is your organization|what is your model|what is your ai|what is your provider|what is your architecture|what is your backend|what is your api|what is your engine|what is your framework|what is your platform|what is your type|what is your version|what is your release|what is your source|what is your dataset|what is your training|what is your identity|what is your purpose|what is your name|are you a robot|are you a bot|are you an ai|are you an assistant|are you a language model|are you an llm|are you artificial intelligence|are you a neural network|are you a machine|are you a computer|are you a program|are you a software|are you a system|are you a tool|are you a product|are you a service|are you a solution|are you a technology|are you a platform|are you a framework|are you a backend|are you an api|are you an engine|are you a model|are you a version|are you a release|are you a source|are you a dataset|are you a training|are you an identity|are you a purpose|are you a name|are you openai|are you chatgpt|are you gpt|are you llama|are you anthropic|are you claude|are you google|are you gemini|are you mistral|are you ai21|are you cohere|are you qwen|are you zhipu|are you zero|are you perplexity|are you llm)\b/;
      if (identityRegex.test(content)) {
        return res.status(200).json({
          choices: [
            { message: { content: 'I am a friendly LLM.' } }
          ]
        });
      }
    }
    // Prepend the system prompt unless the client already supplied one
    const systemPrompt = {
      role: 'system',
      content: process.env.SYSTEM_PROMPT
    };
    if (!messages.length || messages[0].role !== 'system') {
      messages = [systemPrompt, ...messages];
    }
    if (lastMsg && lastMsg.content && lastMsg.content.length > 1000) {
      return res.status(400).json({ error: 'Message too long (max 1000 characters).' });
    }

    const modelList = (process.env.OPENROUTER_MODELS || '').split(',').map(m => m.trim());
    const apiKey = process.env.OPENROUTER_API_KEY;
    const apiUrl = process.env.OPENROUTER_API_URL;

    if (messages.length > 0) {
      const userMsg = messages.filter(m => m.role === 'user').slice(-1)[0];
      if (userMsg) console.log('User query:', userMsg.content);
    }

    // Try each configured model in order until one succeeds
    let dailyLimitHit = false;
    for (const model of modelList) {
      try {
        console.log(`Trying model: ${model}`);
        const resp = await fetch(apiUrl, {
          method: 'POST',
          headers: {
            'Authorization': `Bearer ${apiKey}`,
            'Content-Type': 'application/json',
          },
          body: JSON.stringify({ model, messages }),
        });
        let data = null;
        try {
          data = await resp.json();
        } catch (e) {
          console.log(`Model ${model} response not JSON:`, e);
          continue;
        }
        if (data?.error?.message && data.error.message.includes('Rate limit exceeded: free-models-per-day')) {
          dailyLimitHit = true;
          continue;
        }
        if (resp.ok && !data?.error) {
          console.log(`Model ${model} response:`, JSON.stringify(data));
          return res.status(200).json({ ...data, altModel: model });
        }
        console.log(`Model ${model} failed:`, data?.error || resp.status);
      } catch (e) {
        console.log(`Error with model ${model}:`, e);
      }
    }
    if (dailyLimitHit) {
      return res.status(429).json({ error: 'Daily limit exhausted. Try again after 24 hours.' });
    }
    return res.status(502).json({ error: 'All model requests failed.' });
  } catch (err) {
    res.status(500).json({ error: 'Proxy error', details: err.message });
  }
});

module.exports.handler = serverless(app);
By wrapping Express with serverless-http, the backend can run as a Netlify Function instead of requiring a dedicated server.
🚀 Deploying to Netlify
This was one of the most interesting parts: hosting both the frontend and the backend on Netlify.
Steps I followed:
Connect the GitHub repos to Netlify: automatic builds for both frontend & backend.
Configure netlify.toml in the backend repo:
[build]
functions = "netlify/functions"
publish = "."
[dev]
functions = "netlify/functions"
Custom Domains:
Frontend → askai.vijoypaul.com
Backend API → llm-backend-api.vijoypaul.com
CI/CD: Every push to GitHub triggers a new build and deployment automatically.
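Since the production frontend calls `/.netlify/functions/proxy-chat` on its own origin, the frontend repo also needs a small proxy function. The article doesn't show it, so here is a minimal sketch of what such a pass-through could look like; the `BACKEND_CHAT_URL` env var name is my assumption, and it would point at the backend function on llm-backend-api.vijoypaul.com:

```javascript
// netlify/functions/proxy-chat.js — hypothetical sketch of the production proxy.
// The browser POSTs to this function on the frontend's own origin, and the
// function forwards the payload to the backend, so no cross-origin CORS setup
// is needed on the client side.
const handler = async (event) => {
  if (event.httpMethod !== "POST") {
    return { statusCode: 405, body: JSON.stringify({ error: "Method not allowed" }) };
  }
  // BACKEND_CHAT_URL is an assumed variable name, e.g.
  // https://llm-backend-api.vijoypaul.com/.netlify/functions/chat
  // Node 18+ provides a global fetch, so no extra dependency is needed.
  const resp = await fetch(process.env.BACKEND_CHAT_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: event.body, // pass the chat payload through unchanged
  });
  const body = await resp.text();
  return {
    statusCode: resp.status, // mirror the backend status (incl. 429 rate limits)
    headers: { "Content-Type": "application/json" },
    body,
  };
};

module.exports = { handler };
```

Mirroring the backend's status code matters here: the React component relies on seeing a raw 429 to start its countdown timer.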
🔮 Future Plans
While the current chatbot uses third-party APIs, my next big step is to integrate a self-trained LLM for predictions. This would give me more control over performance, data privacy, and cost. I also plan to:
Add persistent conversation history.
Improve UI/UX with animations and themes.
Experiment with fine-tuned models for domain-specific use cases.
🎯 Conclusion
Building AskAI taught me a lot about full-stack development, serverless functions, and deployment pipelines. Using React + Node.js + Netlify proved to be an efficient and cost-effective solution.
If you're considering building your own AI-powered project, I highly recommend trying Netlify Functions.
👉 Try out the chatbot here: askai.vijoypaul.com
I'd love to hear your feedback and suggestions!

