<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Vijoy Paul's Blog]]></title><description><![CDATA[Vijoy Paul's Blog]]></description><link>https://blog.vijoypaul.com</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1756187847301/2281014c-9eae-4b7f-a8cf-fad7791f4b0f.avif</url><title>Vijoy Paul&apos;s Blog</title><link>https://blog.vijoypaul.com</link></image><generator>RSS for Node</generator><lastBuildDate>Wed, 15 Apr 2026 06:57:20 GMT</lastBuildDate><atom:link href="https://blog.vijoypaul.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[How I Built AskAI (askai.vijoypaul.com) – My AI Chatbot with React, Node.js & Netlify 🚀]]></title><description><![CDATA[📝 Introduction
I’ve always been fascinated by conversational AI and how accessible it has become with modern APIs. To push my skills further, I decided to build my own AI chatbot — AskAI — a simple yet functional chatbot application that I could dep...]]></description><link>https://blog.vijoypaul.com/how-i-built-askai-askaivijoypaulcom-my-ai-chatbot-with-react-nodejs-and-netlify</link><guid isPermaLink="true">https://blog.vijoypaul.com/how-i-built-askai-askaivijoypaulcom-my-ai-chatbot-with-react-nodejs-and-netlify</guid><category><![CDATA[llm]]></category><category><![CDATA[React]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[Netlify]]></category><category><![CDATA[openai]]></category><dc:creator><![CDATA[Vijoy Paul]]></dc:creator><pubDate>Wed, 27 Aug 2025 09:34:45 GMT</pubDate><content:encoded><![CDATA[<hr />
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1756286262794/9065f3f4-8541-46dc-87ad-9a18803f838c.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-introduction">📝 Introduction</h2>
<p>I’ve always been fascinated by conversational AI and how accessible it has become with modern APIs. To push my skills further, I decided to build my own AI chatbot — <a target="_blank" href="https://askai.vijoypaul.com/"><strong>AskAI</strong></a> — a simple yet functional chatbot application that I could deploy and share publicly.</p>
<p>The main goals of this project were:</p>
<ul>
<li><p>To learn how to integrate AI APIs into a full-stack app.</p>
</li>
<li><p>To practice building scalable frontend–backend communication.</p>
</li>
<li><p>To explore serverless deployment using <strong>Netlify Functions</strong>.</p>
</li>
</ul>
<p>In this article, I’ll walk you through my journey: from setting up the React frontend and Node.js backend to configuring CI/CD on Netlify, handling environment variables securely, and overcoming deployment challenges.</p>
<p>👉 GitHub Repos:</p>
<ul>
<li><p><a target="_blank" href="https://github.com/vijoy-paul/llm-chat-frontend">Frontend (React)</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/vijoy-paul/llm-chat-backend">Backend (Node.js/Express)</a></p>
</li>
</ul>
<hr />
<h2 id="heading-tech-stack">⚙️ Tech Stack</h2>
<p>I wanted to keep things beginner-friendly and use tools I’m comfortable with:</p>
<ul>
<li><p><strong>Frontend</strong>: React + Vite (fast build, clean setup).</p>
</li>
<li><p><strong>Backend</strong>: Node.js + Express (for REST API + serverless function handler).</p>
</li>
<li><p><strong>Styling</strong>: Basic CSS + responsive UI.</p>
</li>
<li><p><strong>Deployment</strong>: Netlify (for both frontend hosting and backend functions).</p>
</li>
<li><p><strong>AI Integration</strong>: External LLM API (with plans to integrate my own LLM model later).</p>
</li>
</ul>
<hr />
<h2 id="heading-building-the-frontend-react-vite">🎨 Building the Frontend (React + Vite)</h2>
<p>The frontend is where users interact with the chatbot. I used <strong>Vite</strong> for its speed and developer experience.</p>
<p>Here’s a simplified React component for sending messages to the backend:</p>
<pre><code class="lang-tsx">
import React, { useState, useRef, useEffect } from "react";
import "../styles/Chatbot.css";
import "../../public/animate.min.css";
import ReactMarkdown from "react-markdown";
import ThemeToggle from "./ThemeToggle";


const API_URL =
  import.meta.env.MODE === "production"
    ? "/.netlify/functions/proxy-chat" // Production → call Netlify proxy
    : import.meta.env.VITE_API_URL + "/.netlify/functions/chat"; 

export default function Chatbot({ theme, setTheme }) {
  const [messages, setMessages] = useState([]);
  const [typingIdx, setTypingIdx] = useState(null);
  const [typingText, setTypingText] = useState("");
  const [editIdx, setEditIdx] = useState(null);
  const [editValue, setEditValue] = useState("");
  const [input, setInput] = useState("");
  const [inputError, setInputError] = useState("");
  const [loading, setLoading] = useState(false);
  const [rateLimit, setRateLimit] = useState(0); // seconds left
  const chatEndRef = useRef(null);

  // Typing effect for initial greeting
  useEffect(() =&gt; {
    if (messages.length === 0) {
      const greeting = "Hi! How can I help you today?";
      setTypingIdx(0);
      setTypingText("");
      let i = 0;
      function typeChar() {
        setTypingText(greeting.slice(0, i));
        if (i &lt; greeting.length) {
          i++;
          setTimeout(typeChar, 12 + Math.random() * 30);
        } else {
          setMessages([{ sender: "bot", text: greeting, animate: true }]);
          setTypingIdx(null);
          setTypingText("");
        }
      }
      typeChar();
    }
  }, []);

  useEffect(() =&gt; {
    chatEndRef.current?.scrollIntoView({ behavior: "smooth" });
  }, [messages]);

  useEffect(() =&gt; {
    if (rateLimit &gt; 0) {
      const timer = setInterval(() =&gt; setRateLimit((s) =&gt; (s &gt; 0 ? s - 1 : 0)), 1000);
      return () =&gt; clearInterval(timer);
    }
  }, [rateLimit]);

  const sendMessage = async (e) =&gt; {
    e.preventDefault();
    if (!input.trim()) return;
    if (input.length &gt; 1000) {
      setInputError("Message too long (max 1000 characters).");
      return;
    }
    setInputError("");
    const userMessage = input;
    const newMessages = [...messages, { sender: "user", text: userMessage, animate: true }];
    setMessages(newMessages);
    setInput("");
    setLoading(true);
    try {
      // Prepare messages in OpenAI format
      const formattedMessages = newMessages.map((msg) =&gt; ({
        role: msg.sender === 'user' ? 'user' : 'assistant',
        content: msg.text,
      }));
      const response = await fetch(API_URL, {
        method: "POST",
        // Access-Control-Allow-* are *response* headers set by the server;
        // sending them from the client has no effect, so only Content-Type is needed.
        headers: {
          "Content-Type": "application/json",
        },
        body: JSON.stringify({ messages: formattedMessages }),
      });
      if (response.status === 429) {
        // Typing effect for rate limit message
        const rateMsg = "Too many requests. Please wait 15 seconds before sending another message.";
        setTypingIdx(messages.length + 1);
        setTypingText("");
        let i = 0;
        function typeChar() {
          setTypingText(rateMsg.slice(0, i));
          if (i &lt; rateMsg.length) {
            i++;
            setTimeout(typeChar, 12 + Math.random() * 30);
          } else {
            setMessages((msgs) =&gt; [
              ...msgs,
              { sender: "bot", text: rateMsg, animate: true },
            ]);
            setTypingIdx(null);
            setTypingText("");
          }
        }
        typeChar();
        setRateLimit(15);
        return;
      }
      if (!response.ok) {
        setMessages((msgs) =&gt; [
          ...msgs,
          { sender: "bot", text: `Server error (${response.status}). Please try again later.`, animate: true },
        ]);
        return;
      }
      const data = await response.json();
      const botText = data.choices?.[0]?.message?.content || "Sorry, I didn't get that.";
      setTypingIdx(messages.length + 1); // index of the new bot message
      setTypingText("");
      let i = 0;
      function typeChar() {
        setTypingText(botText.slice(0, i));
        if (i &lt; botText.length) {
          i++;
          setTimeout(typeChar, 12 + Math.random() * 30);
        } else {
          // Only add the message if it hasn't already been added
          setMessages((msgs) =&gt; {
            // Prevent duplicate bot message if re-rendered
            if (msgs[msgs.length - 1]?.text === botText &amp;&amp; msgs[msgs.length - 1]?.sender === 'bot') return msgs;
            return [
              ...msgs,
              { sender: "bot", text: botText, animate: true },
            ];
          });
          setTypingIdx(null);
          setTypingText("");
        }
      }
      typeChar();
    } catch (err) {
      setMessages((msgs) =&gt; [
        ...msgs,
        { sender: "bot", text: "Network error. Please check your connection and try again.", animate: true },
      ]);
    } finally {
      setLoading(false);
    }
  };

  return (
  &lt;div className="chatbot-container"&gt;
      &lt;header className="chat-header"&gt;
        &lt;span className="chat-title"&gt;Chat&lt;/span&gt;
        &lt;span className="header-spacer" /&gt;
        &lt;ThemeToggle theme={theme} setTheme={setTheme} /&gt;
      &lt;/header&gt;
      &lt;div className="chat-window"&gt;
        {messages.map((msg, idx) =&gt; (
          &lt;div
            key={idx}
            className={`chat-message ${msg.sender} animate__animated ${msg.animate ? (msg.sender === 'user' ? 'animate__fadeInRight' : '') : ''}`}
            onAnimationEnd={e =&gt; e.currentTarget.classList.remove('animate__fadeInRight')}
            style={{ position: 'relative' }}
          &gt;
            {msg.sender === "bot" ? (
              &lt;&gt;
                {typeof msg.text === 'string' ? (
                  &lt;ReactMarkdown&gt;{msg.text}&lt;/ReactMarkdown&gt;
                ) : Array.isArray(msg.text) ? (
                  &lt;pre style={{whiteSpace: 'pre-wrap', wordBreak: 'break-word'}}&gt;{JSON.stringify(msg.text, null, 2)}&lt;/pre&gt;
                ) : typeof msg.text === 'object' &amp;&amp; msg.text !== null ? (
                  &lt;pre style={{whiteSpace: 'pre-wrap', wordBreak: 'break-word'}}&gt;{JSON.stringify(msg.text, null, 2)}&lt;/pre&gt;
                ) : (
                  String(msg.text)
                )}
                {!(idx === 0 &amp;&amp; msg.text === "Hi! How can I help you today?") &amp;&amp; (
                  &lt;button
                    style={{ position: 'absolute', top: 8, right: 8, background: 'none', border: 'none', color: '#007aff', cursor: 'pointer', padding: 0, display: 'flex', alignItems: 'center' }}
                    title="Copy response"
                    onClick={() =&gt; navigator.clipboard.writeText(typeof msg.text === 'string' ? msg.text : JSON.stringify(msg.text, null, 2))}
                  &gt;
                    &lt;svg width="20" height="20" viewBox="0 0 20 20" fill="none" xmlns="http://www.w3.org/2000/svg"&gt;
                      &lt;rect x="6" y="6" width="9" height="9" rx="2" stroke="#007aff" strokeWidth="1.5" fill="white"/&gt;
                      &lt;rect x="3" y="3" width="9" height="9" rx="2" stroke="#007aff" strokeWidth="1.5" fill="white"/&gt;
                    &lt;/svg&gt;
                  &lt;/button&gt;
                )}
              &lt;/&gt;
            ) : editIdx === idx ? (
              &lt;&gt;
                &lt;input
                  type="text"
                  value={editValue}
                  maxLength={1000}
                  onChange={e =&gt; setEditValue(e.target.value)}
                  style={{ width: '80%' }}
                /&gt;
                &lt;button
                  style={{ marginLeft: 8, color: '#007aff', background: 'none', border: 'none', cursor: 'pointer' }}
                  onClick={() =&gt; {
                    if (editValue.trim() &amp;&amp; editValue.length &lt;= 1000) {
                      const newMsgs = [...messages];
                      newMsgs[idx].text = editValue;
                      setMessages(newMsgs);
                      setEditIdx(null);
                    }
                  }}
                &gt;Save&lt;/button&gt;
                &lt;button
                  style={{ marginLeft: 4, color: '#d32f2f', background: 'none', border: 'none', cursor: 'pointer' }}
                  onClick={() =&gt; setEditIdx(null)}
                &gt;Cancel&lt;/button&gt;
              &lt;/&gt;
            ) : (
              &lt;&gt;
                {msg.text}
                {/* Edit icon removed as requested */}
              &lt;/&gt;
            )}
          &lt;/div&gt;
        ))}
        {/* Typing animation for bot */}
        {typingIdx !== null &amp;&amp; (
          &lt;div className="chat-message bot animate__animated" style={{ position: 'relative' }}&gt;
            &lt;ReactMarkdown&gt;{typingText + (typingText.length &lt; 1 ? '' : '|')}&lt;/ReactMarkdown&gt;
          &lt;/div&gt;
        )}
        &lt;div ref={chatEndRef} /&gt;
      &lt;/div&gt;
  &lt;form className={"chat-input-row animate__animated animate__fadeInUp" + (window.innerWidth &lt;= 480 ? " mobile-input-row" : "")} onSubmit={sendMessage}&gt;
    &lt;input
      type="text"
      value={input}
      maxLength={1000}
      onChange={(e) =&gt; {
        setInput(e.target.value);
        if (e.target.value.length &gt; 1000) {
          setInputError("Message too long (max 1000 characters).");
        } else {
          setInputError("");
        }
      }}
      placeholder={rateLimit &gt; 0 ? `Please wait ${rateLimit}s...` : typingIdx !== null ? "Bot is typing..." : "Type your message..."}
      disabled={loading || rateLimit &gt; 0 || typingIdx !== null}
      className="chat-input"
    /&gt;
    {inputError &amp;&amp; (
      &lt;div style={{ color: '#d32f2f', fontSize: '0.9em', marginTop: 4 }}&gt;{inputError}&lt;/div&gt;
    )}
    &lt;button type="submit" disabled={loading || !input.trim() || rateLimit &gt; 0 || typingIdx !== null} className="send-btn"&gt;
      {loading ? &lt;span className="loader"&gt;&lt;/span&gt; : rateLimit &gt; 0 ? `${rateLimit}s` : typingIdx !== null ? &lt;span className="loader"&gt;&lt;/span&gt; : "Send"}
    &lt;/button&gt;
  &lt;/form&gt;
  &lt;footer className="chat-footer"&gt;This chatbot is designed for educational purposes only and is not intended for commercial or high-volume use.&lt;/footer&gt;
    &lt;/div&gt;
  );
}
</code></pre>
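<p>In development the component builds <code>API_URL</code> from <code>VITE_API_URL</code>, so that variable should hold the backend's origin. A minimal local setup might look like the fragment below (the port assumes <code>netlify dev</code>'s default of 8888; adjust to your environment):</p>
<pre><code class="lang-ini"># .env.local -- local development only; never commit real values
VITE_API_URL=http://localhost:8888
</code></pre>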
<hr />
<h2 id="heading-setting-up-the-backend-nodejs-express">⚡ Setting Up the Backend (Node.js + Express)</h2>
<p>The backend acts as a middle layer between the frontend and the AI API: it keeps the API key on the server, enforces a simple per-IP rate limit, and falls back across a list of models if one fails.</p>
<pre><code class="lang-js">
<span class="hljs-keyword">const</span> express = <span class="hljs-built_in">require</span>(<span class="hljs-string">'express'</span>);
<span class="hljs-keyword">const</span> serverless = <span class="hljs-built_in">require</span>(<span class="hljs-string">'serverless-http'</span>);
<span class="hljs-keyword">const</span> cors = <span class="hljs-built_in">require</span>(<span class="hljs-string">'cors'</span>);
<span class="hljs-keyword">const</span> fetch = <span class="hljs-built_in">require</span>(<span class="hljs-string">'node-fetch'</span>);
<span class="hljs-keyword">const</span> dotenv = <span class="hljs-built_in">require</span>(<span class="hljs-string">'dotenv'</span>);
dotenv.config();

<span class="hljs-keyword">const</span> app = express();

<span class="hljs-keyword">const</span> rateLimitWindow = <span class="hljs-number">15</span> * <span class="hljs-number">1000</span>; <span class="hljs-comment">// 15 seconds</span>
<span class="hljs-keyword">const</span> ipTimestamps = <span class="hljs-keyword">new</span> <span class="hljs-built_in">Map</span>();

app.use(cors());
app.use(express.json());

app.post(<span class="hljs-string">'*'</span>, <span class="hljs-keyword">async</span> (req, res) =&gt; {
    <span class="hljs-keyword">const</span> ip = req.headers[<span class="hljs-string">'x-forwarded-for'</span>] || req.socket.remoteAddress;
    <span class="hljs-keyword">const</span> now = <span class="hljs-built_in">Date</span>.now();
    <span class="hljs-keyword">const</span> last = ipTimestamps.get(ip) || <span class="hljs-number">0</span>;
    <span class="hljs-keyword">if</span> (now - last &lt; rateLimitWindow) {
        <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">429</span>).json({ <span class="hljs-attr">error</span>: <span class="hljs-string">'Rate limit: Only 1 request per 15 seconds allowed. Please wait before sending another message.'</span> });
    }
    ipTimestamps.set(ip, now);

    <span class="hljs-keyword">try</span> {
        <span class="hljs-keyword">let</span> { messages } = req.body;
        <span class="hljs-keyword">if</span> (!messages || !<span class="hljs-built_in">Array</span>.isArray(messages)) {
            <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">400</span>).json({ <span class="hljs-attr">error</span>: <span class="hljs-string">'Invalid request format.'</span> });
        }

        <span class="hljs-keyword">const</span> lastMsg = messages[messages.length - <span class="hljs-number">1</span>];
        <span class="hljs-keyword">if</span> (lastMsg &amp;&amp; lastMsg.role === <span class="hljs-string">'user'</span> &amp;&amp; lastMsg.content) {
            <span class="hljs-keyword">const</span> content = lastMsg.content.toLowerCase();
            <span class="hljs-keyword">if</span> (<span class="hljs-regexp">/\b(who (are|made|created|built) (you|this)|what(('|’)?s| is) your (name|origin|model|provider|architecture|purpose|identity)|where (are|do) (you|this) (come|from)|are you (openai|chatgpt|gpt|llama|anthropic|claude|google|gemini|mistral|ai21|cohere|qwen|zhipu|zero|perplexity|llm|an ai|a language model|an llm|an artificial intelligence|an assistant)|who is your (creator|provider|developer|author|team|company|organization|maker|parent)|what model (are|is) (this|you)|what ai (are|is) (this|you)|what is your (training|source|dataset|release|version|type|platform|framework|engine|backend|api)|who trained you|who owns you|who operates you|who maintains you|who supports you|who funds you|who designed you|who controls you|who manages you|who supervises you|who is responsible for you|who invented you|who released you|who published you|who launched you|who built this|who is behind you|who is behind this|what company are you|what company is this|what company built you|what company made you|what company created you|what company owns you|what company operates you|what company maintains you|what company supports you|what company funds you|what company designed you|what company controls you|what company manages you|what company supervises you|what company is responsible for you|what company invented you|what company released you|what company published you|what company launched you|what organization are you|what organization is this|what organization built you|what organization made you|what organization created you|what organization owns you|what organization operates you|what organization maintains you|what organization supports you|what organization funds you|what organization designed you|what organization controls you|what organization manages you|what organization supervises you|what organization is responsible for you|what organization invented you|what organization released you|what organization published you|what organization launched you|what is your company|what is your organization|what is your model|what is your ai|what is your provider|what is your architecture|what is your backend|what is your api|what is your engine|what is your framework|what is your platform|what is your type|what is your version|what is your release|what is your source|what is your dataset|what is your training|what is your identity|what is your purpose|what is your name|are you a robot|are you a bot|are you an ai|are you an assistant|are you a language model|are you an llm|are you artificial intelligence|are you a neural network|are you a machine|are you a computer|are you a program|are you a software|are you a system|are you a tool|are you a product|are you a service|are you a solution|are you a technology|are you a platform|are you a framework|are you a backend|are you an api|are you an engine|are you a model|are you a version|are you a release|are you a source|are you a dataset|are you a training|are you an identity|are you a purpose|are you a name|are you openai|are you chatgpt|are you gpt|are you llama|are you anthropic|are you claude|are you google|are you gemini|are you mistral|are you ai21|are you cohere|are you qwen|are you zhipu|are you zero|are you perplexity|are you llm)\b/</span>.test(content)) {
                <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">200</span>).json({
                    <span class="hljs-attr">choices</span>: [
                        { <span class="hljs-attr">message</span>: { <span class="hljs-attr">content</span>: <span class="hljs-string">'I am a friendly LLM.'</span> } }
                    ]
                });
            }
        }

        <span class="hljs-keyword">const</span> systemPrompt = {
            <span class="hljs-attr">role</span>: <span class="hljs-string">'system'</span>,
            <span class="hljs-attr">content</span>: process.env.SYSTEM_PROMPT
        };
        <span class="hljs-keyword">if</span> (!messages.length || messages[<span class="hljs-number">0</span>].role !== <span class="hljs-string">'system'</span>) {
            messages = [systemPrompt, ...messages];
        }

        <span class="hljs-keyword">if</span> (lastMsg &amp;&amp; lastMsg.content &amp;&amp; lastMsg.content.length &gt; <span class="hljs-number">1000</span>) {
            <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">400</span>).json({ <span class="hljs-attr">error</span>: <span class="hljs-string">'Message too long (max 1000 characters).'</span> });
        }

        <span class="hljs-keyword">const</span> modelList = (process.env.OPENROUTER_MODELS || <span class="hljs-string">''</span>).split(<span class="hljs-string">','</span>).map(<span class="hljs-function"><span class="hljs-params">m</span> =&gt;</span> m.trim());
        <span class="hljs-keyword">const</span> apiKey = process.env.OPENROUTER_API_KEY;
        <span class="hljs-keyword">const</span> apiUrl = process.env.OPENROUTER_API_URL;

        <span class="hljs-keyword">if</span> (messages &amp;&amp; messages.length &gt; <span class="hljs-number">0</span>) {
            <span class="hljs-keyword">const</span> userMsg = messages.filter(<span class="hljs-function"><span class="hljs-params">m</span> =&gt;</span> m.role === <span class="hljs-string">'user'</span>).slice(<span class="hljs-number">-1</span>)[<span class="hljs-number">0</span>];
            <span class="hljs-keyword">if</span> (userMsg) <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'User query:'</span>, userMsg.content);
        }

        <span class="hljs-keyword">let</span> dailyLimitHit = <span class="hljs-literal">false</span>;
        <span class="hljs-keyword">for</span> (<span class="hljs-keyword">const</span> model <span class="hljs-keyword">of</span> modelList) {
            <span class="hljs-keyword">try</span> {
                <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Trying model: <span class="hljs-subst">${model}</span>`</span>);
                <span class="hljs-keyword">const</span> resp = <span class="hljs-keyword">await</span> fetch(apiUrl, {
                    <span class="hljs-attr">method</span>: <span class="hljs-string">'POST'</span>,
                    <span class="hljs-attr">headers</span>: {
                        <span class="hljs-string">'Authorization'</span>: <span class="hljs-string">`Bearer <span class="hljs-subst">${apiKey}</span>`</span>,
                        <span class="hljs-string">'Content-Type'</span>: <span class="hljs-string">'application/json'</span>,
                    },
                    <span class="hljs-attr">body</span>: <span class="hljs-built_in">JSON</span>.stringify({ model, messages }),
                });
                <span class="hljs-keyword">let</span> data = <span class="hljs-literal">null</span>;
                <span class="hljs-keyword">try</span> {
                    data = <span class="hljs-keyword">await</span> resp.json();
                } <span class="hljs-keyword">catch</span> (e) {
                    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Model <span class="hljs-subst">${model}</span> response not JSON or error:`</span>, e);
                    <span class="hljs-keyword">continue</span>;
                }
                <span class="hljs-keyword">if</span> (data?.error?.message &amp;&amp; data.error.message.includes(<span class="hljs-string">'Rate limit exceeded: free-models-per-day'</span>)) {
                    dailyLimitHit = <span class="hljs-literal">true</span>;
                    <span class="hljs-keyword">continue</span>;
                }
                <span class="hljs-keyword">if</span> (resp.ok &amp;&amp; !data?.error) {
                    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Model <span class="hljs-subst">${model}</span> response:`</span>, <span class="hljs-built_in">JSON</span>.stringify(data));
                    <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">200</span>).json({ ...data, <span class="hljs-attr">altModel</span>: model });
                } <span class="hljs-keyword">else</span> {
                    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Model <span class="hljs-subst">${model}</span> failed:`</span>, data?.error || resp.status);
                }
            } <span class="hljs-keyword">catch</span> (e) {
                <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Error with model <span class="hljs-subst">${model}</span>:`</span>, e);
            }
        }

        <span class="hljs-keyword">if</span> (dailyLimitHit) {
            <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">429</span>).json({ <span class="hljs-attr">error</span>: <span class="hljs-string">'Daily limit exhausted. Try again after 24 hours.'</span> });
        }

        <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">502</span>).json({ <span class="hljs-attr">error</span>: <span class="hljs-string">'All model requests failed.'</span> });
    } <span class="hljs-keyword">catch</span> (err) {
        res.status(<span class="hljs-number">500</span>).json({ <span class="hljs-attr">error</span>: <span class="hljs-string">'Proxy error'</span>, <span class="hljs-attr">details</span>: err.message });
    }
});

<span class="hljs-built_in">module</span>.exports.handler = serverless(app);
</code></pre>
<p>✅ By wrapping Express with <code>serverless-http</code>, the backend could run as a <strong>Netlify Function</strong> instead of requiring a dedicated server.</p>
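<p>In production the frontend calls a same-origin function, <code>/.netlify/functions/proxy-chat</code> (see <code>API_URL</code> in the component above), so the browser never makes a cross-origin request at all. The real proxy lives in the frontend repo; here is a minimal sketch of what such a function can look like (the <code>BACKEND_URL</code> variable name and its fallback URL are my assumptions, not the repo's code):</p>
<pre><code class="lang-js">// netlify/functions/proxy-chat.js -- hypothetical sketch of a pass-through proxy
const BACKEND_URL =
  process.env.BACKEND_URL ||
  "https://llm-backend-api.vijoypaul.com/.netlify/functions/chat";

exports.handler = async (event) =&gt; {
  if (event.httpMethod !== "POST") {
    return { statusCode: 405, body: JSON.stringify({ error: "Method not allowed" }) };
  }
  try {
    // Forward the request body unchanged and relay status + payload back.
    const resp = await fetch(BACKEND_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: event.body,
    });
    const body = await resp.text();
    return {
      statusCode: resp.status,
      headers: { "Content-Type": "application/json" },
      body,
    };
  } catch (err) {
    return { statusCode: 502, body: JSON.stringify({ error: "Proxy failed", details: err.message }) };
  }
};
</code></pre>
<p>Because the browser only ever talks to its own origin, no <code>Access-Control-Allow-*</code> headers are needed on the client side.</p>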
<hr />
<h2 id="heading-deploying-to-netlify">🌐 Deploying to Netlify</h2>
<p>This was one of the most interesting parts — hosting both the frontend and backend <strong>on Netlify</strong>.</p>
<h3 id="heading-steps-i-followed">Steps I followed:</h3>
<ol>
<li><p><strong>Connect GitHub Repos to Netlify</strong> → Automatic builds for both frontend &amp; backend.</p>
</li>
<li><p><strong>Configure</strong> <code>netlify.toml</code> in Backend Repo:</p>
</li>
</ol>
<pre><code class="lang-toml"><span class="hljs-section">[build]</span>
  <span class="hljs-attr">functions</span> = <span class="hljs-string">"netlify/functions"</span>
  <span class="hljs-attr">publish</span> = <span class="hljs-string">"."</span>

<span class="hljs-section">[dev]</span>
  <span class="hljs-attr">functions</span> = <span class="hljs-string">"netlify/functions"</span>
</code></pre>
<ol start="3">
<li><p><strong>Custom Domains</strong>:</p>
<ul>
<li><p>Frontend → <a target="_blank" href="https://askai.vijoypaul.com/">askai.vijoypaul.com</a></p>
</li>
<li><p>Backend API → <a target="_blank" href="https://llm-backend-api.vijoypaul.com/">llm-backend-api.vijoypaul.com</a></p>
</li>
</ul>
</li>
<li><p><strong>CI/CD</strong>: Every push to GitHub triggers a new build and deployment automatically.</p>
</li>
</ol>
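<p>The backend reads its secrets (<code>OPENROUTER_API_KEY</code>, <code>OPENROUTER_API_URL</code>, <code>OPENROUTER_MODELS</code>, <code>SYSTEM_PROMPT</code>) from environment variables, so they never land in the repo. On Netlify these can be set per site in the dashboard (Site settings → Environment variables) or with the CLI; the values below are placeholders, not real keys or model IDs:</p>
<pre><code class="lang-bash"># Placeholders only -- substitute your own values
netlify env:set OPENROUTER_API_KEY "sk-or-..."
netlify env:set OPENROUTER_API_URL "https://openrouter.ai/api/v1/chat/completions"
netlify env:set OPENROUTER_MODELS "model-one,model-two"
netlify env:set SYSTEM_PROMPT "You are AskAI, a helpful assistant."
</code></pre>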
<hr />
<h2 id="heading-future-plans">🔮 Future Plans</h2>
<p>While the current chatbot uses third-party APIs, my <strong>next big step</strong> is to integrate a <strong>self-trained LLM model</strong> for predictions. This would give me more control over performance, data privacy, and cost. I also plan to:</p>
<ul>
<li><p>Add persistent conversation history.</p>
</li>
<li><p>Improve UI/UX with animations and themes.</p>
</li>
<li><p>Experiment with fine-tuned models for domain-specific use cases.</p>
</li>
</ul>
<hr />
<h2 id="heading-conclusion">🎯 Conclusion</h2>
<p>Building <strong>AskAI</strong> taught me a lot about full-stack development, serverless functions, and deployment pipelines. Using <strong>React + Node.js + Netlify</strong> proved to be an efficient and cost-effective solution.</p>
<p>If you're considering building your own AI-powered project, I highly recommend trying Netlify Functions.</p>
<p>👉 Try out the chatbot here: <a target="_blank" href="https://askai.vijoypaul.com/"><strong>askai.vijoypaul.com</strong></a><br />I'd love to hear your feedback and suggestions!</p>
]]></content:encoded></item></channel></rss>