<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Nexus Blog]]></title><description><![CDATA[Nexus Blog]]></description><link>https://blog.teamnexus.in</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1726167443683/ca0a597d-582e-4bc7-8eb3-c1aa3105957e.png</url><title>Nexus Blog</title><link>https://blog.teamnexus.in</link></image><generator>RSS for Node</generator><lastBuildDate>Tue, 21 Apr 2026 22:41:36 GMT</lastBuildDate><atom:link href="https://blog.teamnexus.in/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[AsciidocFX: The Asciidoc Editor for documentation and authoring]]></title><description><![CDATA[For those who work extensively with Asciidoc files, having a robust and feature-rich editing tool is essential. We have already see the power of Asciidoc and Asciidoctor in our previous articles as a writer's tool and a presentation tool. In this art...]]></description><link>https://blog.teamnexus.in/asciidocfx-the-asciidoc-editor-for-documentation-and-authoring</link><guid isPermaLink="true">https://blog.teamnexus.in/asciidocfx-the-asciidoc-editor-for-documentation-and-authoring</guid><category><![CDATA[asciidoctor]]></category><category><![CDATA[asciidoc]]></category><category><![CDATA[documentation]]></category><category><![CDATA[writing]]></category><dc:creator><![CDATA[Prabhu R]]></dc:creator><pubDate>Fri, 03 May 2024 06:00:17 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/OQMZwNd3ThU/upload/b69a66dce735beb7592e2666a4c147be.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>For those who work extensively with Asciidoc files, having a robust and feature-rich editing tool is essential. 
We have already seen the power of Asciidoc and Asciidoctor in our previous articles as <a target="_blank" href="https://blog.teamnexus.in/blog/2022/06/03/asciidoctor-a-writers-swiss-army-knife/">a writer's tool</a> and <a target="_blank" href="https://blog.teamnexus.in/blog/2022/06/07/stunning-presentations-with-asciidoctor-and-revealjs/">a presentation tool</a>. In this article, we will explore <a target="_blank" href="https://asciidocfx.com/">AsciidocFX</a>, a free, open-source toolset for creating and publishing technical documentation in the AsciiDoc format.</p>
<p><a target="_blank" href="https://asciidocfx.com/">AsciidocFX</a> is an open-source, cross-platform editor that provides an exceptional user experience and a comprehensive set of features for working with Asciidoc files. Though <a target="_blank" href="https://asciidoctor.org">Asciidoctor</a> provides these capabilities, not everyone is comfortable working in a command-line or shell setting; that's where AsciidocFX comes to the rescue. Let's explore some of the key capabilities that make AsciidocFX stand out.</p>
<ul>
<li><p><strong>Live real-time previewing:</strong> While writing documents, authors can use live previewing to see the results of their changes immediately. This enables the user to quickly see how the final document will appear, reducing the time spent on compiling and reviewing documents.</p>
</li>
<li><p><strong>Multi-format output:</strong> AsciidocFX allows generating various document formats such as HTML, PDF, EPUB, Mobi and DocBook based on your needs. This flexibility ensures that the documentation can be consumed in different contexts, catering to a diverse audience.</p>
</li>
<li><p><strong>EPUB Viewer:</strong> AsciidocFX provides an EPUB viewer that allows authors to see how the content will appear as an EPUB book. Note, however, that it is rendered in a browser and not natively within AsciidocFX.</p>
</li>
<li><p><strong>Additional functionality through extensions:</strong> AsciidocFX offers advanced formatting features such as UML diagrams, sequence diagrams, mathematical notation, file-tree representations, etc. through extensions that enable users to create rich and visually appealing documentation. The following popular tools are supported through extensions. Other tools supported by <a target="_blank" href="https://docs.asciidoctor.org/diagram-extension/latest/">asciidoctor-diagram</a> work with AsciidocFX as well; however, those tools must be available on the <code>PATH</code>.</p>
<ul>
<li><a target="_blank" href="https://www.plantuml.com/">PlantUML Diagram</a> - Sequence diagram, Use case diagram, Class diagram, Activity diagram etc.</li>
<li><a target="_blank" href="https://mermaid.js.org/">Mermaid Diagram</a> - Create diagrams using text and code</li>
<li><a target="_blank" href="https://ditaa.sourceforge.net/">Ditaa Diagrams</a> - Convert diagrams drawn in ASCII art to bitmap graphics.</li>
<li><a target="_blank" href="http://www.mathjax.org/">MathJax</a> - Mathematical notations expressed using TeX or MathML</li>
<li>Charts - Using the JavaFX Charts extension</li>
<li>File Tree - Display file tree using a text description</li>
</ul>
</li>
<li><p><strong>Presentation Slides:</strong> AsciidocFX can also create slides from Asciidoc, much like <a target="_blank" href="https://blog.teamnexus.in/blog/2022/06/07/stunning-presentations-with-asciidoctor-and-revealjs/">Asciidoctor + RevealJS</a> or <a target="_blank" href="https://blog.teamnexus.in/blog/2024/03/12/marp-a-markdown-presentation-app-that-simplifies-your-tech-talks/">Marp for Markdown</a>. With the instant preview feature, users see the resulting slides immediately.</p>
</li>
<li><p><strong>Integrated File Management:</strong> The built-in file manager makes it easy to navigate and manage your Asciidoc files and projects. Create, open, and save files directly within the editor for a streamlined workflow.</p>
</li>
<li><p><strong>Cross-Platform Compatibility:</strong> Last but not least, AsciidocFX is cross-platform: written in Java and JavaFX, it is available for Windows, macOS, and Linux, ensuring a consistent experience across platforms and making it a versatile choice for users working in diverse environments.</p>
</li>
</ul>
<p>With these features, AsciidocFX can be an indispensable tool for many use cases, including:</p>
<ul>
<li>Software Documentation</li>
<li>Technical Writers preparing user manuals and technical documentation</li>
<li>Authors writing books</li>
<li>Technical materials that include diagrams and mathematical formulae</li>
</ul>
<p>Overall, AsciidocFX is a powerful and versatile toolset for creating and publishing documentation in the AsciiDoc format. Its features facilitate producing high-quality, visually appealing documentation. By adopting AsciidocFX, users can streamline their documentation process, improve productivity, and ultimately enhance the user experience of their software or content.</p>
<p>Give it a try!</p>
]]></content:encoded></item><item><title><![CDATA[AI/ML - LangChain4j - AiServices]]></title><description><![CDATA[In the previous article, our focus was on delving into the foundational elements of LangChain4j such as ChatLanguageModel, ChatMessage, ChatMemory, and others. Working with components at this level offers great flexibility and complete control, but it c...]]></description><link>https://blog.teamnexus.in/aiml-langchain4j-aiservices</link><guid isPermaLink="true">https://blog.teamnexus.in/aiml-langchain4j-aiservices</guid><category><![CDATA[Machine Learning]]></category><category><![CDATA[Java]]></category><category><![CDATA[langchain4j]]></category><category><![CDATA[ollama]]></category><category><![CDATA[openai]]></category><dc:creator><![CDATA[Prabhu R]]></dc:creator><pubDate>Mon, 08 Apr 2024 06:00:06 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/2EJCSULRwC8/upload/c7a32db5ceea3fd92dc0b3025a0c3164.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the <a target="_blank" href="https://blog.teamnexus.in/blog/2024/03/28/ai-ml-langchain4j-chat-memory/">previous article</a>, our focus was on delving into the foundational elements of <a target="_blank" href="https://docs.langchain4j.dev/">LangChain4j</a> such as <code>ChatLanguageModel</code>, <code>ChatMessage</code>, <code>ChatMemory</code>, and others. Working with components at this level offers great flexibility and complete control, but it comes with the added burden of writing extensive boilerplate code. LLM-driven applications typically necessitate a multitude of interconnected components rather than a single one: prompt templates, chat memory, LLMs, output parsers, and RAG components such as embedding models and stores. Coordinating the numerous interactions between these components becomes an intricate and laborious task.</p>
<p>LangChain4j aims to simplify the development process by allowing developers to concentrate on the core business logic without getting bogged down in intricate implementation details. To achieve this, LangChain4j provides two essential high-level abstractions: <code>AiServices</code> and <code>Chains</code>. These concepts are designed to streamline the workflow, enabling developers to leverage the power of AI effectively while minimizing the complexity of low-level operations.</p>
<h2 id="heading-aiservices-and-chains">AiServices and Chains</h2>
<h3 id="heading-aiservices">AiServices</h3>
<p>LangChain4j introduces a novel approach called <code>AiServices</code>, tailored specifically for the Java ecosystem. The primary objective of AI Services is to abstract away the complexities associated with interacting with Large Language Models (LLMs) and other components, providing a simple and intuitive API.</p>
<p>This approach draws inspiration from popular frameworks like Spring Data JPA and Retrofit, where developers can declaratively define an interface representing the desired API, and LangChain4j automatically generates an object (proxy) that implements this interface. <code>AiServices</code> can be perceived as a component within the service layer of the application, designed to provide AI-powered services, hence the name.</p>
<p><code>AiServices</code> streamline common operations such as formatting inputs for LLMs and parsing outputs from LLMs. Furthermore, they support advanced features like chat memory, tools (Function Calling), and Retrieval-Augmented Generation (RAG).</p>
<p><code>AiServices</code> can be leveraged to build stateful interactive applications that facilitate back-and-forth interactions, as well as to automate processes where each call to the LLM is isolated and self-contained. This versatility empowers developers to harness the power of AI in a seamless and efficient manner, without the need to delve into low-level implementation details.</p>
<h3 id="heading-chains">Chains</h3>
<p>LangChain4j also offers another composable approach called <code>Chains</code>. The concept of Chains originates from <a target="_blank" href="https://python.langchain.com/">Python's LangChain</a>. <code>Chains</code> combine multiple low-level components and orchestrate the interactions between them, enabling the creation of more complex and customized workflows.</p>
<p>However, one potential drawback of <code>Chains</code> is their inherent rigidity, which can pose challenges when customization is required. Currently, LangChain4j has implemented only two types of Chains: <code>ConversationalChain</code> and <code>ConversationalRetrievalChain</code>.</p>
<p>The <code>ConversationalChain</code> facilitates back-and-forth conversations with an LLM, maintaining context and memory across multiple interactions. On the other hand, the <code>ConversationalRetrievalChain</code> extends this functionality by incorporating a retrieval component (<code>ContentRetriever</code>), allowing the LLM to access and leverage external data sources during the conversation.</p>
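To make this concrete, here is a minimal sketch of a <code>ConversationalChain</code>. It reuses the Ollama setup from the examples in this article (the <code>mistral</code> model name and local base URL are assumptions carried over from those examples), so treat it as a sketch rather than a definitive recipe:

```java
import dev.langchain4j.chain.ConversationalChain;
import dev.langchain4j.memory.chat.MessageWindowChatMemory;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.ollama.OllamaChatModel;

class ConversationalChainSketch {

    public static void main(String[] args) {
        // Assumes a local Ollama server, as in the other examples here.
        ChatLanguageModel model = OllamaChatModel.builder()
                .baseUrl("http://localhost:11434")
                .modelName("mistral")
                .build();

        // The chain wires the model and memory together and keeps
        // conversational context across successive execute() calls.
        ConversationalChain chain = ConversationalChain.builder()
                .chatLanguageModel(model)
                .chatMemory(MessageWindowChatMemory.withMaxMessages(10))
                .build();

        System.out.println(chain.execute("Hello, my name is Kevin"));
        // The second call relies on the memory retaining the first exchange.
        System.out.println(chain.execute("What is my name?"));
    }
}
```

Because it talks to a live model, this sketch requires a running Ollama server; there is no way to customize the prompt or the orchestration here, which is exactly the rigidity discussed above.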
<p>LangChain4j recommends using <code>AiServices</code> instead of <code>Chains</code>, as it is more flexible and declarative and provides a simpler API.</p>
<p>In this article, we will see various examples of using <code>AiServices</code>.</p>
<p>First, let's look at a basic <code>AiServices</code> example.</p>
<pre><code class="lang-java"><span class="hljs-comment">//DEPS dev.langchain4j:langchain4j:0.29.1</span>
<span class="hljs-comment">//DEPS dev.langchain4j:langchain4j-ollama:0.29.1</span>

<span class="hljs-keyword">import</span> java.io.Console;
<span class="hljs-keyword">import</span> java.time.Duration;
<span class="hljs-keyword">import</span> java.util.Set;

<span class="hljs-keyword">import</span> dev.langchain4j.memory.ChatMemory;
<span class="hljs-keyword">import</span> dev.langchain4j.memory.chat.MessageWindowChatMemory;
<span class="hljs-keyword">import</span> dev.langchain4j.model.chat.ChatLanguageModel;
<span class="hljs-keyword">import</span> dev.langchain4j.model.ollama.OllamaChatModel;
<span class="hljs-keyword">import</span> dev.langchain4j.service.AiServices;

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">AiServicesBasic</span> </span>{

    <span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">final</span> String MODEL = <span class="hljs-string">"mistral"</span>;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">final</span> String BASE_URL = <span class="hljs-string">"http://localhost:11434"</span>;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">final</span> Duration timeout = Duration.ofSeconds(<span class="hljs-number">120</span>);

    <span class="hljs-class"><span class="hljs-keyword">interface</span> <span class="hljs-title">ChatMinion</span> </span>{
        <span class="hljs-function">String <span class="hljs-title">chat</span><span class="hljs-params">(String message)</span></span>;
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title">main</span><span class="hljs-params">(String[] args)</span> </span>{

        ChatLanguageModel model = OllamaChatModel.builder()
                .baseUrl(BASE_URL)
                .modelName(MODEL)
                .temperature(<span class="hljs-number">0.2</span>)
                .timeout(timeout)
                .build();
        ChatMemory memory = MessageWindowChatMemory.withMaxMessages(<span class="hljs-number">10</span>);

        ChatMinion minion = AiServices.builder(ChatMinion.class)
                .chatLanguageModel(model)
                .chatMemory(memory)
                .build();
        Console console = System.console();
        String question = console.readLine(<span class="hljs-string">"\n\nPlease enter your question: "</span>);

        Set&lt;String&gt; set = Set.of(<span class="hljs-string">"exit"</span>, <span class="hljs-string">"quit"</span>);
        <span class="hljs-keyword">while</span> (!set.contains(question)) {
            String response = minion.chat(question);
            System.out.println(response);
            question = console.readLine(<span class="hljs-string">"\n\nPlease enter your question: "</span>);
        }

    }
}
</code></pre>
<p>We define a simple interface.</p>
<pre><code class="lang-java"><span class="hljs-class"><span class="hljs-keyword">interface</span> <span class="hljs-title">ChatMinion</span> </span>{
    <span class="hljs-function">String <span class="hljs-title">chat</span><span class="hljs-params">(String message)</span></span>;
}
</code></pre>
<p>Next come the low-level components <code>ChatLanguageModel</code> and <code>ChatMemory</code>.</p>
<pre><code class="lang-java">ChatLanguageModel model = OllamaChatModel.builder()
        .baseUrl(BASE_URL)
        .modelName(MODEL)
        .temperature(<span class="hljs-number">0.2</span>)
        .timeout(timeout)
        .build();
ChatMemory memory = MessageWindowChatMemory.withMaxMessages(<span class="hljs-number">10</span>);
</code></pre>
<p>Finally, create the <code>AiServices</code> object using the low-level components we created above.</p>
<pre><code class="lang-java">ChatMinion minion = AiServices.builder(ChatMinion.class)
        .chatLanguageModel(model)
        .chatMemory(memory)
        .build();
</code></pre>
<p>LangChain4j's <code>AiServices</code> utility creates proxy objects that implement the custom interfaces you define (<code>ChatMinion</code> in this case). <code>AiServices</code> must be provided the <code>Class</code> of the interface along with the low-level components to be integrated (<code>ChatLanguageModel</code> and <code>ChatMemory</code>); it then generates a proxy object that implements this interface using reflection.</p>
<p>This proxy object handles the necessary conversions for inputs and outputs, abstracting away the complexities of working with low-level components. Here the <code>ChatMinion</code> interface has a <code>chat</code> method that accepts a <code>String</code> as input. However, the underlying <code>ChatLanguageModel</code> component expects a <code>ChatMessage</code> object as input. In this scenario, <code>AiServices</code> will automatically convert the <code>String</code> input into a <code>UserMessage</code> before invoking the <code>ChatLanguageModel</code>.</p>
<p>Similarly, when the <code>ChatLanguageModel</code> returns an <code>AiMessage</code>, <code>AiServices</code> will convert it into a <code>String</code> before returning the result from the <code>chat</code> method. This seamless conversion process allows working with familiar data types in the application code, while <code>AiServices</code> handles the underlying transformations and interactions with the low-level components transparently.</p>
<pre><code class="lang-java">String response = minion.chat(question);
</code></pre>
<p>To execute the statement above, <code>AiServices</code> does the heavy lifting just described.</p>
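To make the reflection mechanism concrete, here is a toy, standard-library-only sketch of the dynamic-proxy technique <code>AiServices</code> builds on. It is not LangChain4j's actual implementation: the handler simply echoes its input where the real proxy would wrap the <code>String</code> in a <code>UserMessage</code>, invoke the <code>ChatLanguageModel</code>, and unwrap the resulting <code>AiMessage</code>.

```java
import java.lang.reflect.Proxy;

public class ProxyDemo {

    // Same shape as the ChatMinion interface defined earlier.
    interface ChatMinion {
        String chat(String message);
    }

    // Generates an implementation of ChatMinion at runtime, as AiServices
    // does via reflection. The handler body is a stand-in for the real
    // convert-input / invoke-model / convert-output pipeline.
    static ChatMinion createMinion() {
        return (ChatMinion) Proxy.newProxyInstance(
                ChatMinion.class.getClassLoader(),
                new Class<?>[] { ChatMinion.class },
                (proxy, method, args) -> "echo: " + args[0]);
    }

    public static void main(String[] args) {
        // Every call on the proxy is routed through the handler.
        System.out.println(createMinion().chat("hello")); // prints "echo: hello"
    }
}
```

The caller works with an ordinary interface and never sees the handler, which is precisely why the <code>AiServices</code> API feels so lightweight.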
<p>The next code example is the streaming version of the <code>AiServices</code> usage, where the responses are streamed token by token, unlike the previous one.</p>
<p>One additional point to note is that the LLM responds in a sarcastic tone! That is achieved using the <code>@SystemMessage</code> annotation; <code>AiServices</code> takes care of passing the system message to the LLM.</p>
<pre><code class="lang-java"><span class="hljs-class"><span class="hljs-keyword">interface</span> <span class="hljs-title">ChatMinion</span> </span>{
    <span class="hljs-meta">@SystemMessage("Answer in a sarcastic tone.")</span>
    <span class="hljs-function">TokenStream <span class="hljs-title">chat</span><span class="hljs-params">(String message)</span></span>;
}
</code></pre>
<p>Here's the full code.</p>
<pre><code class="lang-java"><span class="hljs-comment">//DEPS dev.langchain4j:langchain4j:0.29.1</span>
<span class="hljs-comment">//DEPS dev.langchain4j:langchain4j-ollama:0.29.1</span>

<span class="hljs-keyword">import</span> java.io.Console;
<span class="hljs-keyword">import</span> java.time.Duration;
<span class="hljs-keyword">import</span> java.util.Set;
<span class="hljs-keyword">import</span> java.util.concurrent.CompletableFuture;

<span class="hljs-keyword">import</span> dev.langchain4j.data.message.AiMessage;
<span class="hljs-keyword">import</span> dev.langchain4j.memory.ChatMemory;
<span class="hljs-keyword">import</span> dev.langchain4j.memory.chat.MessageWindowChatMemory;
<span class="hljs-keyword">import</span> dev.langchain4j.model.chat.StreamingChatLanguageModel;
<span class="hljs-keyword">import</span> dev.langchain4j.model.ollama.OllamaStreamingChatModel;
<span class="hljs-keyword">import</span> dev.langchain4j.model.output.Response;
<span class="hljs-keyword">import</span> dev.langchain4j.service.AiServices;
<span class="hljs-keyword">import</span> dev.langchain4j.service.SystemMessage;
<span class="hljs-keyword">import</span> dev.langchain4j.service.TokenStream;

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">AiServicesStream</span> </span>{

    <span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">final</span> String MODEL = <span class="hljs-string">"mistral"</span>;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">final</span> String BASE_URL = <span class="hljs-string">"http://localhost:11434"</span>;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">final</span> Duration timeout = Duration.ofSeconds(<span class="hljs-number">120</span>);
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> String question;

    <span class="hljs-class"><span class="hljs-keyword">interface</span> <span class="hljs-title">ChatMinion</span> </span>{
        <span class="hljs-meta">@SystemMessage("Answer in a sarcastic tone.")</span>
        <span class="hljs-function">TokenStream <span class="hljs-title">chat</span><span class="hljs-params">(String message)</span></span>;
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title">main</span><span class="hljs-params">(String[] args)</span> </span>{

        StreamingChatLanguageModel model = OllamaStreamingChatModel.builder()
                .baseUrl(BASE_URL)
                .modelName(MODEL)
                .timeout(timeout)
                .temperature(<span class="hljs-number">0.0</span>)
                .build();
        ChatMemory memory = MessageWindowChatMemory.withMaxMessages(<span class="hljs-number">10</span>);

        ChatMinion minion = AiServices.builder(ChatMinion.class)
                .streamingChatLanguageModel(model)
                .chatMemory(memory)
                .build();
        Console console = System.console();
        question = console.readLine(<span class="hljs-string">"\n\nPlease enter your question: "</span>);
        Set&lt;String&gt; set = Set.of(<span class="hljs-string">"exit"</span>, <span class="hljs-string">"quit"</span>);
        <span class="hljs-keyword">while</span> (!set.contains(question.toLowerCase())) {
            CompletableFuture&lt;Response&lt;AiMessage&gt;&gt; future = <span class="hljs-keyword">new</span> CompletableFuture&lt;&gt;();
            TokenStream stream = minion.chat(question);

            stream.onNext(System.out::print)
                    .onComplete(response -&gt; {
                        future.complete(response);
                    })
                    .onError(error -&gt; {
                        future.completeExceptionally(error);
                    })
                    .start();
            future.join();
            question = console.readLine(<span class="hljs-string">"\n\nPlease enter your question: "</span>);
        }
        System.exit(<span class="hljs-number">0</span>);
    }
}
</code></pre>
<p>Sample input and output for the above</p>
<pre><code>Please enter your question: Hello my name is Kevin

Oh, hello there! I<span class="hljs-string">'m just the most interesting person you'</span>ll ever meet. 
You know, I have a pet unicorn that I ride to work every day. 
And I can solve a Rubik<span class="hljs-string">'s cube in under 5 seconds, no big deal. 
So, what'</span>s your superpower? Oh wait, you don<span class="hljs-string">'t have one? 
Well, I guess we'</span>ll just have to be impressed by my extraordinary abilities then.

Please enter your question: What is my name?

Oh, right! I almost forgot. 
Your name is... oh, who am I kidding? 
It doesn<span class="hljs-string">'t matter what your name is. 
I bet you'</span>re still more boring than a snail<span class="hljs-string">'s race. 
But hey, keep trying to impress me with your mundane existence! 
It'</span>s always a good laugh.
</code></pre><p>While the current examples have focused on text-based interactions with Large Language Models (LLMs), the power of <code>AiServices</code> extends far beyond handling plain text data. <code>AiServices</code> can leverage the capabilities of LLMs to work with various types of structured data, such as Plain Old Java Objects (POJOs), Collection classes, and more.</p>
<p>By leveraging the versatility of LLMs, <code>AiServices</code> can seamlessly convert structured data objects into a format suitable for the LLM, process the data, and then convert the LLM's output back into the desired structured data format. This powerful feature enables developers to harness the power of LLMs for a wide range of tasks involving structured data, such as data processing, analysis, transformation, and generation.</p>
<p>For instance, you could define an interface that accepts or returns a POJO representing a complex data structure, and <code>AiServices</code> will handle the conversion between the POJO and the LLM's input/output format transparently. Similarly, you could work with Collection classes like Lists or Maps, allowing the LLM to process and manipulate the data within these collections.</p>
<p>This capability opens up numerous possibilities for integrating LLMs into various domains and applications, extending their utility beyond pure text-based tasks. With <code>AiServices</code>, you can leverage the power of LLMs to process and manipulate structured data in a seamless and intuitive manner, without being constrained by the limitations of traditional text-based interfaces.</p>
<p>In the following example, let's build a crude profanity filter for text.</p>
<pre><code class="lang-java"><span class="hljs-comment">//DEPS dev.langchain4j:langchain4j:0.29.1</span>
<span class="hljs-comment">//DEPS dev.langchain4j:langchain4j-open-ai:0.29.1</span>

<span class="hljs-keyword">import</span> java.util.Map;

<span class="hljs-keyword">import</span> dev.langchain4j.model.chat.ChatLanguageModel;
<span class="hljs-keyword">import</span> dev.langchain4j.model.openai.OpenAiChatModel;
<span class="hljs-keyword">import</span> dev.langchain4j.service.AiServices;
<span class="hljs-keyword">import</span> dev.langchain4j.service.UserMessage;

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">AiServicesWordAnalysis</span> </span>{

    <span class="hljs-class"><span class="hljs-keyword">enum</span> <span class="hljs-title">WordAnalysis</span> </span>{
        OFFENSIVE, BAD, NEUTRAL, GOOD
    }

    <span class="hljs-class"><span class="hljs-keyword">interface</span> <span class="hljs-title">WordModerator</span> </span>{
        <span class="hljs-meta">@UserMessage("Analyze the profanity of {{it}}")</span>
        <span class="hljs-function">WordAnalysis <span class="hljs-title">analyzeWords</span><span class="hljs-params">(String text)</span></span>;

        <span class="hljs-meta">@UserMessage("Does {{it}} have a profanity?")</span>
        <span class="hljs-function"><span class="hljs-keyword">boolean</span> <span class="hljs-title">isProfane</span><span class="hljs-params">(String text)</span></span>;

        <span class="hljs-meta">@UserMessage("Provide alternate better words for the profane words in {{it}}")</span>
        <span class="hljs-function">Map&lt;String, String&gt; <span class="hljs-title">alternateWords</span><span class="hljs-params">(String text)</span></span>;
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title">main</span><span class="hljs-params">(String[] args)</span> </span>{

        ChatLanguageModel model = OpenAiChatModel.withApiKey(<span class="hljs-string">"demo"</span>);

        WordModerator moderator = AiServices.create(WordModerator.class, model);

        WordAnalysis analysis = moderator.analyzeWords(<span class="hljs-string">"He is shit"</span>);
        System.out.println(<span class="hljs-string">"Analysis: "</span> + analysis);

        <span class="hljs-keyword">boolean</span> isProfane = moderator.isProfane(<span class="hljs-string">"He is a dumbo"</span>);
        System.out.println(<span class="hljs-string">"Is Profane: "</span> + isProfane);

        Map&lt;String, String&gt; replacements = moderator.alternateWords(<span class="hljs-string">"He is not intelligent but a shit and dumbo"</span>);
        System.out.println(replacements);
    }
}
</code></pre>
<p>Here we define our interface <code>WordModerator</code>, which declares three methods, and the enum <code>WordAnalysis</code>; both are supplied to <code>AiServices</code>.</p>
<pre><code class="lang-java"><span class="hljs-class"><span class="hljs-keyword">enum</span> <span class="hljs-title">WordAnalysis</span> </span>{
    OFFENSIVE, BAD, NEUTRAL, GOOD
}

<span class="hljs-class"><span class="hljs-keyword">interface</span> <span class="hljs-title">WordModerator</span> </span>{
    <span class="hljs-meta">@UserMessage("Analyze the profanity of {{it}}")</span>
    <span class="hljs-function">WordAnalysis <span class="hljs-title">analyzeWords</span><span class="hljs-params">(String text)</span></span>;

    <span class="hljs-meta">@UserMessage("Does {{it}} have a profanity?")</span>
    <span class="hljs-function"><span class="hljs-keyword">boolean</span> <span class="hljs-title">isProfane</span><span class="hljs-params">(String text)</span></span>;

    <span class="hljs-meta">@UserMessage("Provide alternate better words for the profane words in {{it}}")</span>
    <span class="hljs-function">Map&lt;String, String&gt; <span class="hljs-title">alternateWords</span><span class="hljs-params">(String text)</span></span>;
}
</code></pre>
<ul>
<li><code>analyzeWords</code> determines the degree of profanity in the given text, as defined in the enum <code>WordAnalysis</code> - <code>OFFENSIVE</code>, <code>BAD</code>, <code>NEUTRAL</code>, <code>GOOD</code>.</li>
<li><code>isProfane</code> returns <code>true</code> or <code>false</code> indicating the presence of profanity in the given text.</li>
<li><code>alternateWords</code> identifies the profane words in the text and returns the alternate words the LLM suggests as replacements</li>
</ul>
<p>As mentioned above, <code>AiServices</code> handles the conversion of input/output between the application and the LLM in a transparent manner.</p>
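As an illustration of that conversion step, the following toy sketch (my own, not LangChain4j's actual output parser) shows how a raw text reply from the LLM could be coerced into the declared return types of the <code>WordModerator</code> methods:

```java
public class OutputParsingSketch {

    // Mirrors the enum from the WordModerator example.
    enum WordAnalysis { OFFENSIVE, BAD, NEUTRAL, GOOD }

    // The LLM replies with plain text; a declared return type of
    // WordAnalysis means that text must be matched against the enum constants.
    static WordAnalysis parseEnum(String llmReply) {
        return WordAnalysis.valueOf(llmReply.trim().toUpperCase());
    }

    // Likewise, a boolean return type means interpreting a textual
    // "true"/"false" answer from the model.
    static boolean parseBoolean(String llmReply) {
        return Boolean.parseBoolean(llmReply.trim());
    }

    public static void main(String[] args) {
        System.out.println(parseEnum("offensive")); // prints "OFFENSIVE"
        System.out.println(parseBoolean(" true ")); // prints "true"
    }
}
```

The real parser is more robust (and also handles numbers, dates, POJOs, and collections), but the principle is the same: the proxy sits between your typed interface and the model's free-form text.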
<p>The following is what the LLM returns when running the moderator example:</p>
<pre><code>Analysis: OFFENSIVE
Is Profane: <span class="hljs-literal">true</span>
{shit=fool, dumbo=dimwit}
</code></pre><p>In the next example, we will see how <code>AiServices</code> deals with POJOs. We will build a primitive resume screener and a name extractor from text. It is quite similar to the previous example.</p>
<pre><code class="lang-java"><span class="hljs-comment">//DEPS dev.langchain4j:langchain4j:0.29.1</span>
<span class="hljs-comment">//DEPS dev.langchain4j:langchain4j-ollama:0.29.1</span>

<span class="hljs-keyword">import</span> java.time.Duration;
<span class="hljs-keyword">import</span> java.util.List;

<span class="hljs-keyword">import</span> dev.langchain4j.model.chat.ChatLanguageModel;
<span class="hljs-keyword">import</span> dev.langchain4j.model.ollama.OllamaChatModel;
<span class="hljs-keyword">import</span> dev.langchain4j.service.AiServices;
<span class="hljs-keyword">import</span> dev.langchain4j.service.UserMessage;

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">AiServicesCandidateInfo</span> </span>{

    <span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">final</span> String MODEL = <span class="hljs-string">"mistral"</span>;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">final</span> String BASE_URL = <span class="hljs-string">"http://localhost:11434"</span>;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">final</span> Duration timeout = Duration.ofSeconds(<span class="hljs-number">120</span>);

    <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Candidate</span> </span>{
        String firstName;
        String lastName;
        String email;
        String experience;
        String profession;
        String phone;

        <span class="hljs-meta">@Override</span>
        <span class="hljs-function"><span class="hljs-keyword">public</span> String <span class="hljs-title">toString</span><span class="hljs-params">()</span> </span>{
            <span class="hljs-keyword">return</span> <span class="hljs-string">"Candidate: [firstName="</span> + firstName + <span class="hljs-string">", lastName="</span> + lastName + <span class="hljs-string">", email="</span> + email + <span class="hljs-string">", experience="</span>
                    + experience + <span class="hljs-string">", profession="</span> + profession + <span class="hljs-string">", phone="</span> + phone + <span class="hljs-string">"]"</span>;
        }
    }

    <span class="hljs-class"><span class="hljs-keyword">interface</span> <span class="hljs-title">CandidateInfoCollector</span> </span>{

        <span class="hljs-meta">@UserMessage("Extract information about a person from {{it}}")</span>
        <span class="hljs-function">Candidate <span class="hljs-title">extractCandidateInfo</span><span class="hljs-params">(String text)</span></span>;

        <span class="hljs-meta">@UserMessage("Extract all person names from {{it}}")</span>
        <span class="hljs-function">List&lt;String&gt; <span class="hljs-title">extractPersonNames</span><span class="hljs-params">(String text)</span></span>;
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title">main</span><span class="hljs-params">(String[] args)</span> </span>{

        ChatLanguageModel model = OllamaChatModel.builder()
                .baseUrl(BASE_URL)
                .modelName(MODEL)
                .timeout(timeout)
                .build();

        CandidateInfoCollector candidateInfoCollector = AiServices.create(CandidateInfoCollector.class, model);

        String text = """
                I am Arjun Kumar, I have been working as a Software Developer for 5 years.
                Email: arjun@myemail.com
                Phone: +919876543210
                """;
        Candidate candidate = candidateInfoCollector.extractCandidateInfo(text);
        System.out.println(candidate);

        String text2 = """
                There was an interview being conducted in a software company.
                Arjun and Ananya planned to attend the interview.
                Next morning they went to the venue.
                There they met their friends Akash, Mithun, Sita, Kausalya and Kumar who were also attending.
                The interviewers were Bob and Steve!
                """;

        List&lt;String&gt; candidates = candidateInfoCollector.extractPersonNames(text2);

        System.out.println(candidates);
    }
}
</code></pre>
<p>Here we define our POJO - <code>Candidate</code> and also override the <code>toString</code> method</p>
<pre><code class="lang-java"><span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Candidate</span> </span>{
    String firstName;
    String lastName;
    String email;
    String experience;
    String profession;
    String phone;
}
</code></pre>
<p>The interface <code>CandidateInfoCollector</code> defines the two methods annotated with the <code>UserMessage</code> instruction</p>
<pre><code class="lang-java"><span class="hljs-class"><span class="hljs-keyword">interface</span> <span class="hljs-title">CandidateInfoCollector</span> </span>{

    <span class="hljs-meta">@UserMessage("Extract information about a person from {{it}}")</span>
    <span class="hljs-function">Candidate <span class="hljs-title">extractCandidateInfo</span><span class="hljs-params">(String text)</span></span>;

    <span class="hljs-meta">@UserMessage("Extract all person names from {{it}}")</span>
    <span class="hljs-function">List&lt;String&gt; <span class="hljs-title">extractPersonNames</span><span class="hljs-params">(String text)</span></span>;
}
</code></pre>
<p>Running the code extracts the POJO object and the list of names</p>
<pre><code>Candidate: [firstName=Arjun, lastName=Kumar, email=arjun@myemail.com, experience=<span class="hljs-number">5</span> years, profession=Software Developer, phone=+<span class="hljs-number">919876543210</span>]

[ Arjun, Ananya, Akash, Mithun, Sita, Kausalya, Kumar, Bob, Steve]
</code></pre><p>As we can see from the examples above, <code>AiServices</code> enables the developer to focus on the business logic by taking away the complexities of interacting with the LLM and handling the different data types transparently.</p>
<p>There is more to explore in LangChain4j, such as Tools (Function Calling), Retrieval Augmented Generation (RAG), etc. We will explore those in the upcoming articles.</p>
<p>The code examples are available in the <a target="_blank" href="https://github.com/rprabhu/ai-ml-langchain4j">GitHub repo</a>.</p>
<p>Happy Coding!</p>
]]></content:encoded></item><item><title><![CDATA[AI/ML - Langchain4j - Chat Memory]]></title><description><![CDATA[In the preceding article, we were introduced to AI/ML concepts and explored the process of running a local Large Language Model (LLM) - Ollama.  We further delved into interacting with it via Java using JBang and Langchain4j.
Now, let's explore into ...]]></description><link>https://blog.teamnexus.in/aiml-langchain4j-chat-memory</link><guid isPermaLink="true">https://blog.teamnexus.in/aiml-langchain4j-chat-memory</guid><category><![CDATA[Machine Learning]]></category><category><![CDATA[jbang]]></category><category><![CDATA[langchain4j]]></category><category><![CDATA[Java]]></category><category><![CDATA[llm]]></category><category><![CDATA[ollama]]></category><dc:creator><![CDATA[Prabhu R]]></dc:creator><pubDate>Thu, 28 Mar 2024 11:07:10 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/WhAQMsdRKMI/upload/a7752c3defe30f1db1e671fd8ca09134.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the <a target="_blank" href="https://blog.teamnexus.in/blog/2024/03/20/beginning-the-ai-ml-journey-with-ollama-langchain4j-jbang/">preceding article</a>, we were introduced to AI/ML concepts and explored the process of running a local Large Language Model (LLM) - <a target="_blank" href="https://ollama.com">Ollama</a>.  We further delved into interacting with it via Java using <a target="_blank" href="https://www.jbang.dev">JBang</a> and <a target="_blank" href="https://docs.langchain4j.dev/">Langchain4j</a>.</p>
<p>Now, let's explore what <em>"chat memory"</em> is and how LangChain4j helps with the cumbersome task of maintaining it.</p>
<p>To begin with, let's discuss why chat memory is necessary. LLMs are inherently stateless: they do not preserve any conversation state between calls. Supporting extended conversations therefore requires careful handling of the dialogue context by the application.</p>
<p>If we run the <code>OllamaMistralExample</code> from the <a target="_blank" href="https://blog.teamnexus.in/blog/2024/03/20/beginning-the-ai-ml-journey-with-ollama-langchain4j-jbang/">previous article</a>, the following are the responses from the model</p>
<pre><code>Please enter your question - 'exit' to quit: My name is Kevin, the minion. I work for Gru!

 Hello Kevin the Minion! It's great to meet you, the dedicated and hardworking minion from Gru's team. I'm here to help answer any questions or provide information you may need. What can I assist you with today?

Please enter your question - 'exit' to quit: Who is my boss?

 I cannot determine who your boss is as I don't have the ability to access or interpret real-world information. Your boss would be the person who has authority over you in your workplace, such as a manager or supervisor. If you are unsure, it may be best to ask someone in a position of seniority within your organization or consult your employment contract or HR department for clarification.

Please enter your question - 'exit' to quit: What is my name?

 I am an artificial intelligence and do not have a name or personal identity. I exist to provide information and answer questions to the best of my ability. How may I assist you today?
</code></pre><p>From the responses above, we can clearly see that the model does not remember the context of the conversation, since it keeps no state between calls. Hence, the application interacting with the LLM must manage the conversation messages sent to and received from the LLM.</p>
<p>For sending multiple messages, langchain4j's <code>ChatLanguageModel</code> interface provides the following methods</p>
<pre><code class="lang-java"><span class="hljs-function"><span class="hljs-keyword">default</span> Response&lt;AiMessage&gt; <span class="hljs-title">generate</span><span class="hljs-params">(ChatMessage... messages)</span></span>; 

<span class="hljs-function">Response&lt;AiMessage&gt; <span class="hljs-title">generate</span><span class="hljs-params">(List&lt;ChatMessage&gt; messages)</span></span>;

<span class="hljs-function"><span class="hljs-keyword">default</span> Response&lt;AiMessage&gt; <span class="hljs-title">generate</span><span class="hljs-params">(List&lt;ChatMessage&gt; messages, ToolSpecification toolSpecification)</span></span>;

<span class="hljs-function"><span class="hljs-keyword">default</span> Response&lt;AiMessage&gt; <span class="hljs-title">generate</span><span class="hljs-params">(List&lt;ChatMessage&gt; messages, List&lt;ToolSpecification&gt; toolSpecifications)</span></span>;
</code></pre>
<p>Now let's see a code example that uses the second method in the <code>ChatLanguageModel</code> interface, that is <code>Response&lt;AiMessage&gt; generate(List&lt;ChatMessage&gt; messages);</code></p>
<pre><code class="lang-java"><span class="hljs-comment">//JAVA 21</span>
<span class="hljs-comment">//DEPS dev.langchain4j:langchain4j:0.28.0</span>
<span class="hljs-comment">//DEPS dev.langchain4j:langchain4j-ollama:0.28.0</span>

<span class="hljs-keyword">import</span> java.io.Console;
<span class="hljs-keyword">import</span> java.time.Duration;
<span class="hljs-keyword">import</span> java.util.ArrayList;
<span class="hljs-keyword">import</span> java.util.List;
<span class="hljs-keyword">import</span> java.util.concurrent.CompletableFuture;

<span class="hljs-keyword">import</span> dev.langchain4j.data.message.AiMessage;
<span class="hljs-keyword">import</span> dev.langchain4j.data.message.ChatMessage;
<span class="hljs-keyword">import</span> dev.langchain4j.data.message.UserMessage;
<span class="hljs-keyword">import</span> dev.langchain4j.model.StreamingResponseHandler;
<span class="hljs-keyword">import</span> dev.langchain4j.model.chat.StreamingChatLanguageModel;
<span class="hljs-keyword">import</span> dev.langchain4j.model.ollama.OllamaStreamingChatModel;
<span class="hljs-keyword">import</span> dev.langchain4j.model.output.Response;

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">OllamaMistralBasicMemory</span> </span>{

    <span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">final</span> String MODEL = <span class="hljs-string">"mistral"</span>;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">final</span> String BASE_URL = <span class="hljs-string">"http://localhost:11434"</span>;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> Duration timeout = Duration.ofSeconds(<span class="hljs-number">120</span>);

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title">main</span><span class="hljs-params">(String[] args)</span> </span>{
        beginChatWithBasicMemory();
    }

    <span class="hljs-function"><span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title">beginChatWithBasicMemory</span><span class="hljs-params">()</span> </span>{

        Console console = System.console();
        List&lt;ChatMessage&gt; messages = <span class="hljs-keyword">new</span> ArrayList&lt;&gt;();

        StreamingChatLanguageModel model = OllamaStreamingChatModel.builder()
                .baseUrl(BASE_URL)
                .modelName(MODEL)
                .timeout(timeout)
                .temperature(<span class="hljs-number">0.0</span>)
                .build();

        String question = console.readLine(
                <span class="hljs-string">"\n\nPlease enter your question - 'exit' to quit: "</span>);
        <span class="hljs-keyword">while</span> (!<span class="hljs-string">"exit"</span>.equalsIgnoreCase(question)) {

            messages.add(UserMessage.from(question));
            CompletableFuture&lt;Response&lt;AiMessage&gt;&gt; futureResponse = <span class="hljs-keyword">new</span> CompletableFuture&lt;&gt;();
            model.generate(messages, <span class="hljs-keyword">new</span> StreamingResponseHandler&lt;AiMessage&gt;() {

                <span class="hljs-meta">@Override</span>
                <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">onNext</span><span class="hljs-params">(String token)</span> </span>{
                    System.out.print(token);
                }

                <span class="hljs-meta">@Override</span>
                <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">onComplete</span><span class="hljs-params">(Response&lt;AiMessage&gt; response)</span> </span>{
                    messages.add(response.content());
                    futureResponse.complete(response);
                }

                <span class="hljs-meta">@Override</span>
                <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">onError</span><span class="hljs-params">(Throwable error)</span> </span>{
                    futureResponse.completeExceptionally(error);
                }
            });

            futureResponse.join();
            question = console.readLine(<span class="hljs-string">"\n\nPlease enter your question - 'exit' to quit: "</span>);
        }
    }

}
</code></pre>
<p>The <code>OllamaMistralBasicMemory</code> class is a modified version of the <code>OllamaMistralExample</code> class from the <a target="_blank" href="https://blog.teamnexus.in/blog/2024/03/20/beginning-the-ai-ml-journey-with-ollama-langchain4j-jbang/">previous article</a>. We use the <code>StreamingChatLanguageModel</code>, which lets us receive each token as it is generated rather than waiting for the full response.</p>
<p>Here we use an <code>ArrayList</code> to store the <code>UserMessage</code>s and <code>AiMessage</code>s that are sent to the LLM each time we ask it to generate a response.</p>
<p>For each input received from the user, <code>messages.add(UserMessage.from(question));</code> adds the user's message to the list. When the response has been fully received, <code>onComplete(Response&lt;AiMessage&gt; response)</code> is triggered, which in turn adds the model's reply to the list via <code>messages.add(response.content());</code>.</p>
<p>Now try executing <code>OllamaMistralBasicMemory</code>: the responses align with what we expect, and the model appears to retain the context. The following is the output for the same conversation as above.</p>
<pre><code>Please enter your question - 'exit' to quit: My name is Kevin, the minion. I work for Gru!

 Hello Kevin the Minion! It's great to meet you, the dedicated and hardworking minion from Gru's team. I'm here to help answer any questions or provide information you may need. What can I assist you with today?

Please enter your question - 'exit' to quit: What is my name?

 I apologize for the confusion earlier, Kevin. You have introduced yourself as Kevin the Minion. So, your name is indeed Kevin! Is there something specific you would like to know or discuss related to Gru's lab or minion activities?

Please enter your question - 'exit' to quit: Who is my boss?

 Your boss is Gru! He is the mastermind and leader of the evil organization that you and your fellow Minions work for. Gru is known for his cunning plans and schemes, and he relies on your help to carry them out. If you have any questions or need assistance with tasks related to Gru's plans, feel free to ask!
</code></pre><p>As we can see, the LLM remembers the context and starts providing appropriate responses to the questions. However, there are a few problems with this implementation</p>
<ul>
<li>First, LLMs possess a finite context window that accommodates a certain number of tokens at any given moment. Conversations have the potential to surpass this limit</li>
<li>Second, each token comes with a cost, which increases progressively as more tokens are requested from the LLM</li>
<li>Third, the resource usage increases considerably on both the LLM and the application over time as the list builds up</li>
</ul>
<p>Managing <code>ChatMessage</code>s manually is an arduous task. To simplify this, LangChain4j provides the <code>ChatMemory</code> interface for managing <code>ChatMessage</code>s. It is backed by a <code>List</code> and offers additional features such as persistence (via a <code>ChatMemoryStore</code>) and, crucially, an <em>"eviction policy"</em> that addresses the issues described above.</p>
<p>LangChain4j currently implements two algorithms for eviction policy:</p>
<ul>
<li><code>MessageWindowChatMemory</code> provides a sliding-window functionality, retaining the <code>N</code> most recent messages and evicting older ones once the window exceeds the specified capacity <code>N</code>. However, a <code>SystemMessage</code> is retained and never evicted; the other message types - <code>UserMessage</code>, <code>AiMessage</code> and <code>ToolExecutionResultMessage</code> - are eligible for eviction</li>
<li><code>TokenWindowChatMemory</code> also provides a sliding-window functionality but retains the <code>N</code> most recent <strong>tokens</strong> instead of messages. A <code>Tokenizer</code> needs to be specified to count the tokens in each <code>ChatMessage</code>. If there isn't enough space for a new message, the oldest message (or several) is evicted. Messages are indivisible: if a message doesn't fit, it is evicted whole. As with <code>MessageWindowChatMemory</code>, the <code>SystemMessage</code> is never evicted.</li>
</ul>
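<p>The message-window eviction rule can be pictured with a small, self-contained sketch. This is purely illustrative and not LangChain4j's implementation; it only mimics the behaviour described above: keep at most <code>N</code> messages, never evicting the system message.</p>

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Toy sliding-window memory: retains at most maxMessages entries,
// but a message starting with "SYSTEM:" is pinned and never evicted.
// Illustrative only, not LangChain4j's MessageWindowChatMemory.
class ToyMessageWindowMemory {
    private final int maxMessages;
    private String systemMessage;                  // pinned, never evicted
    private final Deque<String> window = new ArrayDeque<>();

    ToyMessageWindowMemory(int maxMessages) { this.maxMessages = maxMessages; }

    void add(String message) {
        if (message.startsWith("SYSTEM:")) {
            systemMessage = message;               // replaced, not evicted
            return;
        }
        window.addLast(message);
        // Evict the oldest non-system messages beyond the capacity left
        // over after the (optional) pinned system message.
        int capacity = systemMessage == null ? maxMessages : maxMessages - 1;
        while (window.size() > capacity) {
            window.removeFirst();
        }
    }

    List<String> messages() {
        List<String> all = new ArrayList<>();
        if (systemMessage != null) all.add(systemMessage);
        all.addAll(window);
        return all;
    }

    public static void main(String[] args) {
        ToyMessageWindowMemory memory = new ToyMessageWindowMemory(3);
        memory.add("SYSTEM: You are a helpful assistant");
        memory.add("user: My name is Kevin");
        memory.add("ai: Hello Kevin!");
        memory.add("user: Who is my boss?");
        // The oldest user message was evicted; the system message survives.
        System.out.println(memory.messages());
    }
}
```

<p>With a capacity of <code>3</code> and a system message pinned, only the two most recent conversational messages survive the eviction.</p>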
<p>Now, let's rewrite <code>OllamaMistralBasicMemory</code> to use <code>ChatMemory</code> with the <code>MessageWindowChatMemory</code> eviction policy</p>
<pre><code class="lang-java"><span class="hljs-comment">//JAVA 21</span>
<span class="hljs-comment">//DEPS dev.langchain4j:langchain4j:0.28.0</span>
<span class="hljs-comment">//DEPS dev.langchain4j:langchain4j-ollama:0.28.0</span>

<span class="hljs-keyword">import</span> java.io.Console;
<span class="hljs-keyword">import</span> java.time.Duration;
<span class="hljs-keyword">import</span> java.util.concurrent.CompletableFuture;

<span class="hljs-keyword">import</span> dev.langchain4j.data.message.AiMessage;
<span class="hljs-keyword">import</span> dev.langchain4j.data.message.UserMessage;
<span class="hljs-keyword">import</span> dev.langchain4j.memory.ChatMemory;
<span class="hljs-keyword">import</span> dev.langchain4j.memory.chat.MessageWindowChatMemory;
<span class="hljs-keyword">import</span> dev.langchain4j.model.StreamingResponseHandler;
<span class="hljs-keyword">import</span> dev.langchain4j.model.chat.StreamingChatLanguageModel;
<span class="hljs-keyword">import</span> dev.langchain4j.model.ollama.OllamaStreamingChatModel;
<span class="hljs-keyword">import</span> dev.langchain4j.model.output.Response;

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">OllamaMistralChatMemory</span> </span>{

    <span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">final</span> String MODEL = <span class="hljs-string">"mistral"</span>;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">final</span> String BASE_URL = <span class="hljs-string">"http://localhost:11434"</span>;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> Duration timeout = Duration.ofSeconds(<span class="hljs-number">120</span>);

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title">main</span><span class="hljs-params">(String[] args)</span> </span>{

        beginChatWithChatMemory();
        System.exit(<span class="hljs-number">0</span>);
    }

    <span class="hljs-function"><span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title">beginChatWithChatMemory</span><span class="hljs-params">()</span> </span>{

        Console console = System.console();
        ChatMemory memory = MessageWindowChatMemory.withMaxMessages(<span class="hljs-number">3</span>);

        StreamingChatLanguageModel model = OllamaStreamingChatModel.builder()
                .baseUrl(BASE_URL)
                .modelName(MODEL)
                .timeout(timeout)
                .temperature(<span class="hljs-number">0.0</span>)
                .build();

        String question = console.readLine(
                <span class="hljs-string">"\n\nPlease enter your question - 'exit' to quit: "</span>);
        <span class="hljs-keyword">while</span> (!<span class="hljs-string">"exit"</span>.equalsIgnoreCase(question)) {

            memory.add(UserMessage.from(question));
            CompletableFuture&lt;Response&lt;AiMessage&gt;&gt; futureResponse = <span class="hljs-keyword">new</span> CompletableFuture&lt;&gt;();
            model.generate(memory.messages(), <span class="hljs-keyword">new</span> StreamingResponseHandler&lt;AiMessage&gt;() {

                <span class="hljs-meta">@Override</span>
                <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">onNext</span><span class="hljs-params">(String token)</span> </span>{
                    System.out.print(token);
                }

                <span class="hljs-meta">@Override</span>
                <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">onComplete</span><span class="hljs-params">(Response&lt;AiMessage&gt; response)</span> </span>{
                    memory.add(response.content());
                    futureResponse.complete(response);
                }

                <span class="hljs-meta">@Override</span>
                <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">onError</span><span class="hljs-params">(Throwable error)</span> </span>{
                    futureResponse.completeExceptionally(error);
                }
            });

            futureResponse.join();
            question = console.readLine(<span class="hljs-string">"\n\nPlease enter your question - 'exit' to quit: "</span>);
        }
    }

}
</code></pre>
<p>Here we have set the maximum number of messages to <code>3</code> for the sake of testing it quickly; a higher value can be set if needed. Therefore, at most <code>3</code> <code>ChatMessage</code>s are retained, counting both questions (<code>UserMessage</code>) and responses (<code>AiMessage</code>).</p>
<p>If we run the program, state our name first, and then ask a few more questions, the messages carrying the name are evicted once the window of 3 is exceeded. If we now ask the LLM for our name, it no longer has that context, as <code>MessageWindowChatMemory</code> has evicted those messages. This is where LangChain4j does the heavy lifting of managing the messages.</p>
<p>The <code>ChatMemory</code> is a low-level component to manage the messages. However, there are high-level components <code>AiServices</code> and <code>ConversationalChain</code> that are available in LangChain4j. We will explore those in the upcoming articles.</p>
<p>The code examples can be found <a target="_blank" href="https://github.com/rprabhu/ai-ml-langchain4j">here</a></p>
<p>Happy Coding!</p>
]]></content:encoded></item><item><title><![CDATA[Beginning the AI/ML Journey with Ollama, Langchain4J & JBang]]></title><description><![CDATA[The realm of AI/ML, especially Generative AI, has garnered significant attention worldwide following the emergence of ChatGPT. Consequently, there has been a surge of interest in developing various models and tools within this domain.
In this article...]]></description><link>https://blog.teamnexus.in/beginning-the-aiml-journey-with-ollama-langchain4j-jbang</link><guid isPermaLink="true">https://blog.teamnexus.in/beginning-the-aiml-journey-with-ollama-langchain4j-jbang</guid><category><![CDATA[AI]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[llm]]></category><category><![CDATA[ollama]]></category><category><![CDATA[jbang]]></category><category><![CDATA[Java]]></category><category><![CDATA[langchain4j]]></category><dc:creator><![CDATA[Prabhu R]]></dc:creator><pubDate>Wed, 20 Mar 2024 10:17:37 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/6UDansS-rPI/upload/7c9d09c0fe776d606997de1d66f2e9a1.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The realm of AI/ML, especially Generative AI, has garnered significant attention worldwide following the emergence of <a target="_blank" href="https://chat.openai.com/">ChatGPT</a>. Consequently, there has been a surge of interest in developing various models and tools within this domain.</p>
<p>In this article, we will look at how to interact with AI models using Java. But before that, let's take a look at what an "AI model" is and the terms and concepts related to it.</p>
<h2 id="heading-aiml-primer">AI/ML Primer</h2>
<p>Artificial intelligence (AI) models are computational algorithms crafted to process and produce information, often emulating human cognitive abilities. By assimilating patterns and insights from extensive datasets, these models have the capacity to generate predictions, text, images, or other forms of output, thereby augmenting a multitude of applications spanning diverse industries.</p>
<p>Numerous AI models exist, each tailored to serve specific purposes. While ChatGPT has garnered attention for its text input and output capabilities, other models and companies support a range of inputs and outputs to cater to diverse needs, including images, audio, video and more.</p>
<p>What distinguishes models such as GPT (Generative Pre-trained Transformer) is that this pre-training transforms AI into a versatile developer tool, eliminating the need for a deep understanding of machine learning or model training.</p>
<h3 id="heading-llm-large-language-models">LLM - Large Language Models</h3>
<p>LLM (Large Language Model) refers to a type of AI model designed to understand and generate human-like text at a high level of proficiency. LLMs are trained on vast amounts of text data and are capable of performing a wide range of natural language processing tasks, including text generation, translation, summarization, question answering, and more. Examples of LLMs include GPT (Generative Pre-trained Transformer) models such as GPT-3, BERT (Bidirectional Encoder Representations from Transformers), and others. These models have demonstrated impressive capabilities in understanding and generating text, leading to their widespread use in various applications, including chatbots, virtual assistants, content creation tools, and more.</p>
<p>Integrating LLMs into applications requires access to LLM providers such as OpenAI, Google Vertex AI, or Azure OpenAI, or software such as Ollama, LM Studio, or LocalAI that allows LLMs to be run locally. We will see how to run LLMs locally later in this article.</p>
<p>Let's look at a few more terms and concepts before we get into the code that integrates with LLMs</p>
<h3 id="heading-tokens">Tokens</h3>
<p>In the context of Large Language Models (LLMs), tokens refer to the basic units of text that the model processes. These tokens can represent individual words, subwords, or even characters, depending on how the model is trained and configured.</p>
<p>When a piece of text is input into an LLM, it is typically tokenized into smaller units before being processed by the model. Each token corresponds to a specific unit of text, and the model generates output based on the patterns and relationships it learns from the input tokens.</p>
<p>Tokenization is a crucial step in the operation of LLMs, as it allows the model to break down complex text data into manageable units for processing. By tokenizing text, LLMs can analyze and generate responses with a granular level of detail, enabling them to understand and generate human-like text.</p>
<p>Tokenization can vary based on the specific tokenization scheme used and the vocabulary size of the model</p>
<p>In some tokenization schemes, a single word may be split into multiple tokens, especially if it contains complex morphology or is not present in the model's vocabulary. For example:</p>
<ul>
<li><strong>Word:</strong> "university"</li>
<li><strong>Tokens:</strong> ["uni", "vers", "ity"]</li>
<li><strong>Explanation:</strong> In this example, the word "university" is split into three tokens: "uni", "vers", and "ity". This decomposition allows the model to capture the morphological structure of the word.</li>
</ul>
<p>Conversely, multiple consecutive words may be combined into a single token, particularly in subword tokenization schemes like Byte Pair Encoding (BPE) or WordPiece. For example:</p>
<ul>
<li><strong>Phrase:</strong> "natural language processing"</li>
<li><strong>Token:</strong> "natural_language_processing"</li>
<li><strong>Explanation:</strong> In this example, the phrase "natural language processing" is combined into a single token "natural_language_processing". This allows the model to treat the entire phrase as a single unit during processing, which can be beneficial for capturing multi-word expressions or domain-specific terminology.</li>
</ul>
<p>The examples above are for illustration only and do not necessarily represent how an actual LLM tokenizes text.</p>
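<p>As a toy illustration of the greedy, vocabulary-driven splitting described above, the following sketch performs longest-match lookup against a tiny hand-picked vocabulary. Both the vocabulary and the <code>ToyTokenizer</code> class are invented for this example; real tokenizers such as BPE learn their vocabularies from large corpora.</p>

```java
import java.util.Arrays;

// Toy greedy longest-match subword tokenizer (illustrative only).
class ToyTokenizer {

    // Hypothetical hand-picked vocabulary of subword units
    static final String[] VOCAB = { "uni", "vers", "ity", "natural", "language", "processing" };

    static String[] tokenize(String word) {
        StringBuilder out = new StringBuilder();
        int i = 0;
        while (i != word.length()) {
            String best = null;
            // Pick the longest vocabulary piece that matches at position i
            for (String piece : VOCAB) {
                if (word.startsWith(piece, i)) {
                    if (best == null || piece.length() > best.length()) {
                        best = piece;
                    }
                }
            }
            if (best == null) {
                // Unknown character: emit it as its own token
                best = word.substring(i, i + 1);
            }
            if (out.length() != 0) {
                out.append(' ');
            }
            out.append(best);
            i += best.length();
        }
        return out.toString().split(" ");
    }

    public static void main(String[] args) {
        // Prints [uni, vers, ity]
        System.out.println(Arrays.toString(tokenize("university")));
    }
}
```

A real tokenizer also handles whitespace, casing, and byte-level fallbacks; this sketch only mimics the longest-match idea.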
<h3 id="heading-prompts-and-prompt-templates">Prompts and Prompt Templates</h3>
<h4 id="heading-prompts">Prompts</h4>
<p>Prompts lay the groundwork for language-based inputs, directing an AI model towards generating particular outputs. While those acquainted with ChatGPT might view prompts as mere textual inputs submitted through a dialog box to the API, their significance extends beyond this. In many AI models, the prompt text is more than a mere string, encompassing broader contextual elements. As we saw in the previous section, how tokens are processed varies based on the context and the tokenization scheme.</p>
<p>Developing compelling prompts is a blend of artistic creativity and scientific precision. The significance of this interaction method has led to the emergence of <em>"Prompt Engineering"</em> as a distinct discipline. A plethora of techniques aimed at enhancing prompt effectiveness are continually evolving. Dedication to refining a prompt can markedly enhance the resultant output.</p>
<h4 id="heading-prompt-templates">Prompt Templates</h4>
<p>Prompt templates serve as structured guides for crafting effective prompts, helping users communicate their intentions clearly and succinctly to AI models.</p>
<p>Prompt templates can vary depending on the specific use case or application domain. They may include placeholders for variables or user inputs, guiding users to provide contextually relevant information. By following a prompt template, users can ensure consistency and clarity in their prompts, which in turn improves the performance and relevance of the AI model's responses.</p>
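<p>At its simplest, filling a template's placeholders is plain string substitution. The helper below is a hypothetical sketch for illustration, not the API of any particular library:</p>

```java
// Hypothetical sketch: fill a prompt template's placeholders
// (written as {name}) with user-supplied values.
class PromptTemplateSketch {

    static String fill(String template, String[][] variables) {
        String result = template;
        for (String[] pair : variables) {
            // pair[0] is the placeholder name, pair[1] its value
            result = result.replace("{" + pair[0] + "}", pair[1]);
        }
        return result;
    }

    public static void main(String[] args) {
        String template = "Planning to book a {service}? Preferred date: {date}.";
        String[][] vars = { { "service", "haircut" }, { "date", "Friday" } };
        // Prints: Planning to book a haircut? Preferred date: Friday.
        System.out.println(fill(template, vars));
    }
}
```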
<p>For example, a prompt template for a chatbot might include placeholders for the user's inquiry, desired action, and any relevant context or constraints. By filling in these placeholders with specific details, users can create well-formed prompts that elicit accurate and useful responses from the chatbot. Following is a sample chatbot prompt template:</p>
<pre><code>Planning to book a [service]? 
Let me know your preferred date and time and 
I'll assist you with the booking process.
</code></pre><h3 id="heading-enhancingupdating-the-data-to-the-ai-model">Enhancing/Updating the Data to the AI Model</h3>
<p>The GPT 3.5/4.0 training dataset extends only until September 2021, which is an apparent limitation for getting up-to-date answers. Consequently, the model says that it does not know the answer to questions that require knowledge beyond that date. Such datasets can range from a few hundred gigabytes to a few petabytes.</p>
<p>In order to incorporate additional data into the model, the following techniques are used:</p>
<ul>
<li><p><strong>Fine Tuning:</strong> a conventional method in machine learning that entails adjusting the model's parameters and altering its internal weighting. This extremely resource-intensive process is a challenge when training large models like GPT, and certain models may not provide this capability.</p>
</li>
<li><p><strong>Retrieval Augmented Generation (RAG):</strong> RAG, also referred to as <em>"Prompt Stuffing"</em>, offers a pragmatic approach. In this method, the system extracts unstructured data from documents, processes it, and stores it in a vector database such as <a target="_blank" href="https://www.trychroma.com/">Chroma</a>, <a target="_blank" href="https://www.pinecone.io/">Pinecone</a>, <a target="_blank" href="https://milvus.io/">Milvus</a>, <a target="_blank" href="https://qdrant.tech/">Qdrant</a>, and others. During retrieval, when an AI model is tasked with answering a user's query, the question along with all "similar" document fragments retrieved from the vector database are incorporated into the prompt forwarded to the AI model.</p>
</li>
<li><p><strong>Function Calling:</strong> This mechanism facilitates the registration of custom user functions, linking large language models with external system APIs. These systems enable LLMs to access real-time data and execute data processing tasks on their behalf.</p>
</li>
</ul>
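<p>To make the RAG idea more concrete, the "prompt stuffing" step can be sketched in plain Java. The retrieval itself (embedding the question and querying the vector database) is omitted here, and the prompt wording is purely illustrative:</p>

```java
// Sketch of RAG "prompt stuffing": retrieved document fragments are
// concatenated into the prompt that is finally sent to the AI model.
class RagPromptSketch {

    static String buildPrompt(String question, String[] retrievedFragments) {
        StringBuilder prompt = new StringBuilder();
        prompt.append("Answer the question using only the context below.\n\n");
        prompt.append("Context:\n");
        for (String fragment : retrievedFragments) {
            prompt.append("- ").append(fragment).append('\n');
        }
        prompt.append("\nQuestion: ").append(question).append('\n');
        return prompt.toString();
    }

    public static void main(String[] args) {
        String[] fragments = {
            "JBang runs Java source files directly.",
            "JBang resolves dependencies declared in comments."
        };
        System.out.println(buildPrompt("What does JBang do?", fragments));
    }
}
```

<p>In a real RAG pipeline, the fragments would be the nearest-neighbour results of a similarity search over the vector store.</p>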
<h2 id="heading-integrating-llms-into-applications">Integrating LLMs into applications</h2>
<p>Now let's dive into the coding aspect of integrating LLMs into applications. The following are the prerequisites:</p>
<ul>
<li><strong>Ollama:</strong> <a target="_blank" href="https://ollama.com">Ollama</a> is a lightweight, extensible framework for building and running language models on your local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications. Download and install the appropriate binary for your OS. </li>
<li><strong>Langchain4j:</strong> <a target="_blank" href="https://docs.langchain4j.dev/">LangChain4j</a> is a Java library designed to simplify integrating AI and large language models (LLMs) into Java applications. It offers a unified API so that you don't have to learn and implement a provider-specific API for each one. To experiment with a different LLM or embedding store, you can easily switch between them without rewriting your code. LangChain4j currently supports over 10 popular LLM providers and more than 15 embedding stores.</li>
<li><p><strong>JBang:</strong> <a target="_blank" href="https://www.jbang.dev">JBang</a> is a neat little tool that enables running Java code as a script. It directly runs the Java source file and saves the effort of setting up or configuring the project for Maven, Gradle, or any other build system. It also manages external library dependencies declared in comments in the source itself, as we'll see in the following code. You can also read about JBang in our <a target="_blank" href="https://blog.teamnexus.in/blog/2020/07/26/jbang-the-power-of-shell-scripting-for-java/">previous article</a></p>
</li>
</ul>
<ol>
<li><p>First, download the <a target="_blank" href="https://ollama.com">Ollama</a> binary and install it. Alternatively, one can install <a target="_blank" href="https://lmstudio.ai/">LM Studio</a>, which also allows running LLM models locally. However, in this article, we will use <a target="_blank" href="https://ollama.com">Ollama</a>.</p>
</li>
<li><p>Next, download and run an Ollama LLM model. Executing the following command in the shell downloads and runs the LLM:</p>
</li>
</ol>
<pre><code class="lang-shell">ollama run mistral
</code></pre>
<p>You can run any other model such as llama2, phi, etc. as well. However, note that <code>Ollama</code> will download the required model, which will be a few gigabytes in size.</p>
<ol start="3">
<li><p>Download and install <a target="_blank" href="https://www.jbang.dev">JBang</a>. When executing the code, JBang expects the Java binary to be in the PATH; if not, <a target="_blank" href="https://www.jbang.dev">JBang</a> will download the necessary JDK as well.</p>
</li>
<li><p>Type the following code in your editor and save it as <code>OllamaMistralExample.java</code></p>
</li>
</ol>
<pre><code class="lang-java"><span class="hljs-comment">//JAVA 21</span>
<span class="hljs-comment">//DEPS dev.langchain4j:langchain4j:0.28.0</span>
<span class="hljs-comment">//DEPS dev.langchain4j:langchain4j-ollama:0.28.0</span>

<span class="hljs-keyword">import</span> java.io.Console;
<span class="hljs-keyword">import</span> java.time.Duration;
<span class="hljs-keyword">import</span> java.util.concurrent.CompletableFuture;

<span class="hljs-keyword">import</span> dev.langchain4j.data.message.AiMessage;
<span class="hljs-keyword">import</span> dev.langchain4j.model.StreamingResponseHandler;
<span class="hljs-keyword">import</span> dev.langchain4j.model.chat.ChatLanguageModel;
<span class="hljs-keyword">import</span> dev.langchain4j.model.chat.StreamingChatLanguageModel;
<span class="hljs-keyword">import</span> dev.langchain4j.model.ollama.OllamaChatModel;
<span class="hljs-keyword">import</span> dev.langchain4j.model.ollama.OllamaStreamingChatModel;
<span class="hljs-keyword">import</span> dev.langchain4j.model.output.Response;

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">OllamaMistralExample</span> </span>{

    <span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">final</span> String MODEL = <span class="hljs-string">"mistral"</span>;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">final</span> String BASE_URL = <span class="hljs-string">"http://localhost:11434"</span>;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> Duration timeout = Duration.ofSeconds(<span class="hljs-number">120</span>);

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title">main</span><span class="hljs-params">(String[] args)</span> </span>{
        Console console = System.console();
        String model = console.readLine(
                <span class="hljs-string">"Welcome, Butler at your service!!\n\nPlease choose your model - Type '1' for the Basic Model and '2' for Streaming Model:"</span>);
        String question = console.readLine(<span class="hljs-string">"\n\nPlease enter your question - 'exit' to quit: "</span>);

        <span class="hljs-keyword">while</span> (!<span class="hljs-string">"exit"</span>.equalsIgnoreCase(question)) {
            <span class="hljs-keyword">if</span> (<span class="hljs-string">"1"</span>.equals(model)) {
                basicModel(question);
            } <span class="hljs-keyword">else</span> {
                streamingModel(question);
            }
            question = console.readLine(<span class="hljs-string">"\n\nPlease enter your question - 'exit' to quit: "</span>);
        }
    }

    <span class="hljs-function"><span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title">basicModel</span><span class="hljs-params">(String question)</span> </span>{
        ChatLanguageModel model = OllamaChatModel.builder()
                .baseUrl(BASE_URL)
                .modelName(MODEL)
                .timeout(timeout)
                .build();
        System.out.println(<span class="hljs-string">"\n\nPlease wait...\n\n"</span>);
        String answer = model.generate(question);
        System.out.println(answer);
    }

    <span class="hljs-function"><span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title">streamingModel</span><span class="hljs-params">(String question)</span> </span>{

        StreamingChatLanguageModel model = OllamaStreamingChatModel.builder()
                .baseUrl(BASE_URL)
                .modelName(MODEL)
                .timeout(timeout)
                .temperature(<span class="hljs-number">0.0</span>)
                .build();

        CompletableFuture&lt;Response&lt;AiMessage&gt;&gt; futureResponse = <span class="hljs-keyword">new</span> CompletableFuture&lt;&gt;();
        model.generate(question, <span class="hljs-keyword">new</span> StreamingResponseHandler&lt;AiMessage&gt;() {

            <span class="hljs-meta">@Override</span>
            <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">onNext</span><span class="hljs-params">(String token)</span> </span>{
                System.out.print(token);
            }

            <span class="hljs-meta">@Override</span>
            <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">onComplete</span><span class="hljs-params">(Response&lt;AiMessage&gt; response)</span> </span>{
                futureResponse.complete(response);
            }

            <span class="hljs-meta">@Override</span>
            <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">onError</span><span class="hljs-params">(Throwable error)</span> </span>{
                futureResponse.completeExceptionally(error);
            }
        });

        futureResponse.join();
    }

}
</code></pre>
<ol start="5">
<li>Now type the following command to run the program; we will see the explanation shortly.</li>
</ol>
<pre><code class="lang-shell">jbang OllamaMistralExample.java
</code></pre>
<p>JBang automatically downloads the dependencies and runs this Java file.</p>
<p>Now let's get into the code.</p>
<p>The comment lines at the top of the file are processed by JBang. The <code>//JAVA</code> comment line indicates the target JDK version, and the ones that start with <code>//DEPS</code> define the library dependencies. Here we define the Langchain4j libraries (core + ollama) that JBang downloads and processes. For further details about the JBang comment lines, please visit the <a target="_blank" href="https://www.jbang.dev">JBang website</a>.</p>
<p>The <code>OllamaMistralExample</code> class defines two methods apart from <code>main</code> - <code>basicModel</code> and <code>streamingModel</code>. The key difference between them is that <code>basicModel</code> waits for the LLM to generate the full response before responding; the user has to wait until the LLM completes the generation. LLMs generate one token at a time, so LLM providers offer a way to stream the tokens as soon as they are generated, which significantly improves the user experience: the user can start reading the response almost immediately rather than waiting for the entire response. The <code>streamingModel</code> method harnesses this streaming capability and starts to output the response as soon as it is received from the LLM provider.</p>
<p>Langchain4j provides APIs for both the standard response and the streaming response. The <code>ChatLanguageModel</code> interface is for getting the standard response and the <code>StreamingChatLanguageModel</code> interface is for the streaming response. Both interfaces provide similar methods; however, the <code>StreamingChatLanguageModel</code> requires a <code>StreamingResponseHandler</code> implementation to be passed as an argument. </p>
<p>The <code>StreamingResponseHandler</code> interface specifies the following methods</p>
<pre><code class="lang-java"><span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">interface</span> <span class="hljs-title">StreamingResponseHandler</span>&lt;<span class="hljs-title">T</span>&gt; </span>{

    <span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">onNext</span><span class="hljs-params">(String token)</span></span>;

    <span class="hljs-function"><span class="hljs-keyword">default</span> <span class="hljs-keyword">void</span> <span class="hljs-title">onComplete</span><span class="hljs-params">(Response&lt;T&gt; response)</span> </span>{}

    <span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">onError</span><span class="hljs-params">(Throwable error)</span></span>;
}
</code></pre>
<ul>
<li><code>onNext</code> gets called when the LLM generates a token and responds back.</li>
<li><code>onComplete</code> is a default method that does nothing; however, it can be overridden to deal with the complete response that is delivered once the LLM has finished generating.</li>
<li><code>onError</code> is invoked when there is an error generating the response.</li>
</ul>
<p>The <code>basicModel</code> method uses the <code>OllamaChatModel.builder()</code> to build the class implementing the <code>ChatLanguageModel</code> interface and the <code>streamingModel</code> method uses the <code>OllamaStreamingChatModel.builder()</code> to build the class implementing the <code>StreamingChatLanguageModel</code> interface.</p>
<p>For both interface types - standard and streaming - the following fields need to be passed to the respective builders:</p>
<ul>
<li>Base URL: <code>http://localhost:11434</code> The URL and port where Ollama exposes the LLM service</li>
<li>Model Name: <code>mistral</code> in this example.</li>
<li>Timeout: The timeout is optional; however, it is safer to set it in a local environment because LLMs could be slow to generate a response due to resource constraints (no GPU, limited memory, etc.). </li>
</ul>
<p>Both the <code>ChatLanguageModel</code> and <code>StreamingChatLanguageModel</code> interfaces provide a similar <code>generate</code> method; however, as mentioned above, the <code>StreamingChatLanguageModel</code>'s <code>generate</code> method expects an additional argument: an implementation of the <code>StreamingResponseHandler</code> interface.</p>
<p>Try running the code above and enter into the world of AI/ML using LLMs. What we have seen above is just the beginning. There's a lot more to explore in this space, especially what Langchain4j offers - <code>AiServices</code>, <code>Structured Data Extraction</code>, <code>Chains</code>, <code>Embedding</code>, <code>RAG</code>, <code>Function Calling</code> and more. </p>
<p>Apart from Langchain4j, <a target="_blank" href="https://docs.spring.io/spring-ai/reference/index.html">Spring AI</a> also has support for AI/ML the same way Langchain4j does. We'll explore those in the upcoming articles.</p>
<p>Happy Coding!</p>
]]></content:encoded></item><item><title><![CDATA[Marp: A Markdown Presentation App That Simplifies Your Tech Talks]]></title><description><![CDATA[In today's fast-paced tech world, giving effective presentations is crucial for conveying complex ideas and engaging audiences. While Markdown has emerged as a popular lightweight markup language for creating rich text documents, its use in creating ...]]></description><link>https://blog.teamnexus.in/marp-a-markdown-presentation-app-that-simplifies-your-tech-talks</link><guid isPermaLink="true">https://blog.teamnexus.in/marp-a-markdown-presentation-app-that-simplifies-your-tech-talks</guid><category><![CDATA[presentations]]></category><category><![CDATA[markdown]]></category><category><![CDATA[presentation skills]]></category><category><![CDATA[Open Source]]></category><dc:creator><![CDATA[Prabhu R]]></dc:creator><pubDate>Tue, 12 Mar 2024 16:00:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/oqStl2L5oxI/upload/3f5bef93beeec0b0f43ab2fd2b3ca96d.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In today's fast-paced tech world, giving effective presentations is crucial for conveying complex ideas and engaging audiences. While Markdown has emerged as a popular lightweight markup language for creating rich text documents, its use in creating dynamic, interactive, and visually appealing presentations can be challenging. This is where Marp comes into the picture - an open-source Markdown presentation app that simplifies the process of creating engaging tech talks. In our <a target="_blank" href="https://blog.teamnexus.in/blog/2022/06/07/stunning-presentations-with-asciidoctor-and-revealjs/">earlier post</a>, we saw how Asciidoctor can be used to create stunning presentations; in this article, we will see how to do the same with <a target="_blank" href="https://daringfireball.net/projects/markdown/">markdown</a>.</p>
<h2 id="heading-what-is-marp">What is Marp?</h2>
<p><a target="_blank" href="https://marp.app/">Marp (Markdown Presentation Ecosystem)</a> is a lightweight and flexible tool for creating interactive and visually appealing presentations using simple Markdown syntax. It is built on the Marpit framework, which converts Markdown into HTML slide decks. Marp supports the creation of presentations that can be rendered as static HTML, PDF, or PowerPoint.</p>
<h2 id="heading-features">Features</h2>
<ol>
<li><strong>Simple Markdown syntax:</strong> Marp uses a straightforward Markdown format for creating slides, making it easy for developers and writers who are already familiar with Markdown.</li>
<li><strong>Interactive presentations:</strong> Marp supports the use of JavaScript and HTML to create interactive elements, such as quizzes, forms, and animations, to enhance user engagement.</li>
<li><strong>Live preview:</strong> Marp provides a live preview mode while editing, allowing you to see the changes in real-time and fine-tune your slides without leaving the editor.</li>
<li><strong>Directives and extended syntax:</strong> Marp supports a variety of directives and extended syntax (image syntax, math typesetting, auto-scaling, etc...)  to create beautiful slides, as sometimes, simple text content isn't enough to emphasize or represent the content - mathematical equations for example.</li>
<li><strong>Export to various formats:</strong> Marp supports exporting presentations in multiple formats such as HTML, PDF, and SVG, allowing you to share your content with a broader audience.</li>
<li><strong>Customizable themes:</strong> Marp offers several built-in themes to choose from or the option to create custom themes, enabling you to design presentations that align with your brand and style.</li>
<li><strong>Official Toolset:</strong> Marp provides an official toolset that has the <a target="_blank" href="https://marketplace.visualstudio.com/items?itemName=marp-team.marp-vscode">Visual Studio Code extension</a> and <a target="_blank" href="https://github.com/marp-team/marp-cli">Marp CLI</a> for command line usage.</li>
<li><strong>Pluggable Architecture:</strong> Marp ecosystem is based on the Marpit framework for creating HTML slides deck and has a pluggable architecture where the features can be extended by developers via plugins</li>
</ol>
<h2 id="heading-getting-started">Getting Started</h2>
<p>The recommended and best option is to use the provided <a target="_blank" href="https://marketplace.visualstudio.com/items?itemName=marp-team.marp-vscode">Visual Studio Code extension</a>. The <a target="_blank" href="https://github.com/marp-team/marp-cli">Marp CLI</a>, the command line version, can also be used; however, one has to convert the Markdown to the required output format every time.</p>
<h2 id="heading-use-cases">Use Cases</h2>
<p>Marp is an ideal solution for developers, designers, educators, and anyone who needs to create engaging technical or informational presentations. Some common use cases include:</p>
<ol>
<li><strong>Software demos:</strong> Marp can be used to create presentations that showcase the features of a software product or application, with interactive elements that allow users to interact and explore the functionality.</li>
<li><strong>Educational materials:</strong> Marp is an excellent tool for creating engaging educational materials such as tutorials, workshop guides, or study resources, with the ability to add code snippets, diagrams, and other multimedia content.</li>
<li><strong>Technical talks:</strong> Marp simplifies the process of creating technical talks and workshops by offering a lightweight and flexible presentation engine that can handle complex content, interactive elements, and customizable styles.</li>
<li><strong>Data visualizations:</strong> Marp supports integrations with libraries like D3.js and Plotly, making it an ideal choice for data scientists and researchers who need to present complex data visualizations in an engaging and accessible format.</li>
</ol>
<h2 id="heading-sample">Sample</h2>
<pre><code class="lang-markdown">---
theme: gaia
_class: lead
paginate: true
backgroundColor: #fff
backgroundImage: url('https://marp.app/assets/hero-background.svg')
---

![bg left:40% 80%](https://marp.app/assets/marp.svg)

# **Marp**

Markdown Presentation Ecosystem

https://marp.app/

---

# How to write slides

Split pages by horizontal ruler (`---`). It's very simple! :satisfied:

---

# Slide 1

foobar

---

# Slide 2

foobar
</code></pre>
<p>Marp offers a simple yet powerful solution for creating interactive and visually appealing technical presentations using Markdown syntax. With its live preview mode, customizable themes, and support for various export formats, it provides developers, educators, and presenters with an essential tool for engaging audiences and conveying complex ideas effectively. So, next time you need to create a presentation, consider giving Marp a try!</p>
]]></content:encoded></item><item><title><![CDATA[jsoup: A Powerful Java Library for Working With HTML and XML Documents]]></title><description><![CDATA[jsoup is a popular open-source Java library that enables developers to parse, manipulate, and extract data from HTML and XML documents. In this article, we will explore the basics of using jsoup, including parsing HTML documents, selecting and manipu...]]></description><link>https://blog.teamnexus.in/jsoup-a-powerful-java-library-for-working-with-html-and-xml-documents</link><guid isPermaLink="true">https://blog.teamnexus.in/jsoup-a-powerful-java-library-for-working-with-html-and-xml-documents</guid><category><![CDATA[Java]]></category><category><![CDATA[parsing]]></category><category><![CDATA[HTML]]></category><category><![CDATA[xml]]></category><category><![CDATA[xml-parsing]]></category><category><![CDATA[html-parsing]]></category><dc:creator><![CDATA[Prabhu R]]></dc:creator><pubDate>Mon, 11 Mar 2024 09:16:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/bYiw48KLbmw/upload/7001c9696bb5faf789c636337c76d144.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a target="_blank" href="https://jsoup.org/">jsoup</a> is a popular open-source Java library that enables developers to parse, manipulate, and extract data from HTML and XML documents. In this article, we will explore the basics of using jsoup, including parsing HTML documents, selecting and manipulating elements, and updating content in HTML. We'll provide code snippets along the way to help illustrate its capabilities.</p>
<p>jsoup simplifies working with real-world HTML and XML. It offers an easy-to-use API for URL fetching, data parsing, extraction, and manipulation using DOM API methods, CSS, and XPath selectors.</p>
<p>The jsoup website mentions that it implements the <a target="_blank" href="https://html.spec.whatwg.org/multipage/syntax.html">WHATWG HTML5</a> specification and parses HTML to the same DOM as modern browsers do. With jsoup you can:</p>
<ul>
<li>scrape and parse HTML from a URL, file, or string</li>
<li>find and extract data, using DOM traversal or CSS selectors</li>
<li>manipulate the HTML elements, attributes, and text</li>
<li>clean user-submitted content against a safelist, to prevent XSS attacks</li>
<li>output tidy HTML</li>
<li>jsoup is designed to deal with all varieties of HTML found in the wild; from pristine and validating, to invalid tag-soup; jsoup will create a sensible parse tree.</li>
</ul>
<h2 id="heading-getting-started-with-jsoup">Getting Started with jsoup</h2>
<p>To begin using jsoup, you first need to add the library as a dependency in your project. If you are using Maven, include the following in your <code>pom.xml</code> file:</p>
<pre><code class="lang-xml"><span class="hljs-tag">&lt;<span class="hljs-name">dependency</span>&gt;</span>
  <span class="hljs-tag">&lt;<span class="hljs-name">groupId</span>&gt;</span>org.jsoup<span class="hljs-tag">&lt;/<span class="hljs-name">groupId</span>&gt;</span>
  <span class="hljs-tag">&lt;<span class="hljs-name">artifactId</span>&gt;</span>jsoup<span class="hljs-tag">&lt;/<span class="hljs-name">artifactId</span>&gt;</span>
  <span class="hljs-tag">&lt;<span class="hljs-name">version</span>&gt;</span>x.xx.x<span class="hljs-tag">&lt;/<span class="hljs-name">version</span>&gt;</span>
<span class="hljs-tag">&lt;/<span class="hljs-name">dependency</span>&gt;</span>
</code></pre>
<p>where <code>x.xx.x</code> is the relevant version, as of this writing it is <code>1.15.3</code></p>
<p>or if you are using Gradle, include the following in your <code>build.gradle</code> file:</p>
<pre><code class="lang-groovy">implementation 'org.jsoup:jsoup:x.xx.x'
</code></pre>
<h2 id="heading-parsing-an-html-document">Parsing an HTML Document</h2>
<p>To parse an HTML document using jsoup, you can use the <code>jsoup.connect()</code> method followed by the URL of the HTML file or webpage you want to work with. Here's a simple example:</p>
<pre><code class="lang-java"><span class="hljs-keyword">import</span> org.jsoup.Jsoup;
<span class="hljs-keyword">import</span> org.jsoup.nodes.Document;

<span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">JsoupExample</span> </span>{
  <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title">main</span><span class="hljs-params">(String[] args)</span> <span class="hljs-keyword">throws</span> Exception </span>{
    Document document = Jsoup.connect(<span class="hljs-string">"https://www.example.com"</span>).get();

    <span class="hljs-comment">// Continue working with the parsed document</span>
  }
}
</code></pre>
<h2 id="heading-selecting-and-manipulating-elements">Selecting and Manipulating Elements</h2>
<p>jsoup provides several methods to select and manipulate elements in an HTML or XML document. For example, you can use the <code>select()</code> method to select elements based
on their tags or attributes, like this:</p>
<pre><code class="lang-java"><span class="hljs-keyword">import</span> org.jsoup.Jsoup;
<span class="hljs-keyword">import</span> org.jsoup.nodes.Document;
<span class="hljs-keyword">import</span> org.jsoup.nodes.Element;
<span class="hljs-keyword">import</span> org.jsoup.select.Elements;

<span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">JsoupExample</span> </span>{
  <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title">main</span><span class="hljs-params">(String[] args)</span> <span class="hljs-keyword">throws</span> Exception </span>{
    Document document = Jsoup.connect(<span class="hljs-string">"https://www.example.com"</span>).get();

    <span class="hljs-comment">// Select all 'h1' tags in the document</span>
    Elements h1Elements = document.select(<span class="hljs-string">"h1"</span>);

    <span class="hljs-comment">// Update the text of every 'h1' tag</span>
    <span class="hljs-keyword">for</span> (Element h1 : h1Elements) {
      h1.text(h1.text().replaceAll(<span class="hljs-string">"old"</span>, <span class="hljs-string">"new"</span>));
    }
  }
}
</code></pre>
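<p>Besides editing fetched pages, jsoup can also build documents programmatically. The sketch below uses jsoup's <code>Document.createShell</code> and <code>Element.appendElement</code> methods; the surrounding class is our own example code:</p>

```java
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class JsoupBuildExample {
  public static void main(String[] args) {
    // Create an empty HTML shell (html/head/body) with a base URI
    Document document = Document.createShell("https://www.example.com");

    // Append a paragraph element to the body and set its text
    Element paragraph = document.body().appendElement("p");
    paragraph.text("Hello, jsoup!");

    // Prints: Hello, jsoup!
    System.out.println(document.body().text());
  }
}
```

This avoids hand-assembling HTML strings and keeps the output well-formed.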
<h2 id="heading-updating-content-in-html">Updating Content in HTML</h2>
<p>In addition to selecting and manipulating elements, you can also update the content of individual elements or the entire document using various methods provided by
jsoup. For example:</p>
<pre><code class="lang-java"><span class="hljs-keyword">import</span> org.jsoup.Jsoup;
<span class="hljs-keyword">import</span> org.jsoup.nodes.Document;
<span class="hljs-keyword">import</span> org.jsoup.nodes.Element;

<span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">JsoupExample</span> </span>{
  <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title">main</span><span class="hljs-params">(String[] args)</span> <span class="hljs-keyword">throws</span> Exception </span>{
    Document document = Jsoup.connect(<span class="hljs-string">"https://www.example.com"</span>).get();

    <span class="hljs-comment">// Update the content of a specific element</span>
    Element header = document.selectFirst(<span class="hljs-string">"h1"</span>);
    <span class="hljs-keyword">if</span> (header != <span class="hljs-keyword">null</span>) {
      header.text(<span class="hljs-string">"New Header"</span>);
    }

    <span class="hljs-comment">// Update the entire document's content</span>
    String newContent = <span class="hljs-string">"This is the updated content."</span>;
    document.body().html(newContent);
  }
}
</code></pre>
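<p>Note that these edits only change jsoup's in-memory document tree; to persist them you serialize the document back to a string. A small sketch (the HTML snippet and class name <code>SerializeSketch</code> are illustrative):</p>

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class SerializeSketch {
  public static void main(String[] args) {
    // Parse, modify, then serialize the document back to HTML text
    Document document = Jsoup.parse("<html><body><h1>Old Header</h1></body></html>");
    document.selectFirst("h1").text("New Header");

    // outerHtml() returns the full serialized document, including <html> and <body>
    String updated = document.outerHtml();
    System.out.println(updated.contains("New Header")); // prints "true"
  }
}
```

<p>The resulting string can then be written to a file, stored back into a document archive, or returned from an HTTP handler.</p>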
<p>jsoup is a powerful Java library for working with HTML and XML documents, enabling developers to parse, extract data, and manipulate elements efficiently. By using jsoup's simple yet effective APIs, you can save time and effort while producing cleaner, more maintainable code. It can be put to effective use in content scraping (without violating any policies or legal requirements, of course) or for editing and manipulating documents in a document store or archive. Happy Coding!</p>
]]></content:encoded></item><item><title><![CDATA[Guided Tours Solution for Your Web Application]]></title><description><![CDATA[Creating a user-friendly and engaging onboarding experience is crucial for ensuring that new users can effectively navigate any web application. Guided tours are an excellent way to help new users familiarize themselves with the various features of t...]]></description><link>https://blog.teamnexus.in/guided-tours-solution-for-your-web-application</link><guid isPermaLink="true">https://blog.teamnexus.in/guided-tours-solution-for-your-web-application</guid><category><![CDATA[library]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[Web Development]]></category><dc:creator><![CDATA[Prabhu R]]></dc:creator><pubDate>Wed, 06 Mar 2024 14:30:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/CelYLE6Zvro/upload/bfe30972bb9acbe2eaddadc5f83b10d4.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Creating a user-friendly and engaging onboarding experience is crucial for ensuring that new users can effectively navigate any web application. Guided tours are an excellent way to help new users familiarize themselves with the various features of the application. In this article, we will compare five popular JavaScript libraries used for building guided tours in web applications: Shepherd, Bootstrap Tour, driver.js, Tour Guide JS, and Intro.js.</p>
<h2 id="heading-shepherd">Shepherd</h2>
<p><a target="_blank" href="https://shepherdjs.dev/">Shepherd</a> is a powerful and customizable open source JavaScript library for creating interactive tours and onboarding experiences in web applications. It uses another open source library, <a target="_blank" href="https://floating-ui.com/">Floating UI</a>, to render the tour dialogs. It offers a simple setup process, dynamic content support, the ability to create custom actions and events, and theming and styling. More importantly, it is responsive and never goes offscreen on smaller devices. <a target="_blank" href="https://shepherdjs.dev/">Shepherd</a> also provides excellent documentation and support, making it a popular choice among developers.</p>
<h2 id="heading-bootstrap-tour">Bootstrap Tour</h2>
<p><a target="_blank" href="https://bootstraptour.com/">Bootstrap Tour</a> is another open source JavaScript library that focuses on creating guided tours for web applications. It integrates seamlessly with the Bootstrap framework and offers a wide range of features:</p>
<ul>
<li>customizable steps </li>
<li>keyboard navigation</li>
<li>progress indicators</li>
<li>page navigation</li>
<li>automatic step progressing</li>
<li>interactive step progression, e.g. advancing when the user clicks on a page element</li>
</ul>
<p>The library hasn't been updated in a long time, yet its feature set still holds up well!</p>
<h2 id="heading-driverjs">driver.js</h2>
<p><a target="_blank" href="https://driverjs.com/">driver.js</a> is an open source JavaScript library designed for creating product tours, highlights, contextual help and feature adoption. Thanks to its extensive API, driver.js can be used for a wide range of use cases, including:</p>
<ul>
<li>Onboarding users by explaining how to use the product and answering common questions</li>
<li>Removing distractions with the highlights feature and focusing user attention on what matters</li>
<li>Providing contextual help for users</li>
<li>Highlighting new features and making sure users don't miss them</li>
<li>Working on mobile devices as well</li>
</ul>
<p><a target="_blank" href="https://driverjs.com/">driver.js</a> offers its features in different flavours, with demo examples showcased on the site, including:</p>
<ul>
<li>Animated tours </li>
<li>Non-animated tours </li>
<li>Async tours</li>
<li>Tours with Progress</li>
<li>Theming</li>
<li>Overlay styling and more</li>
</ul>
<p><a target="_blank" href="https://driverjs.com/">driver.js</a> is actively developed and has very good documentation.</p>
<h2 id="heading-tour-guide-js">Tour Guide JS</h2>
<p><a target="_blank" href="https://tourguidejs.com/">Tour Guide JS</a> is a lightweight open source library for creating guided tours in web applications. It offers the following features:</p>
<ul>
<li>It is framework agnostic</li>
<li>Has Typescript support</li>
<li>Like <a target="_blank" href="https://shepherdjs.dev/">Shepherd</a>, this too uses <a target="_blank" href="https://floating-ui.com/">Floating UI</a> for positioning its tour dialogs</li>
<li>Provides a lot of options for customization</li>
<li>Has extensive documentation</li>
<li>Also available via npm, which the documentation recommends over using it directly</li>
</ul>
<h2 id="heading-introjs">Intro.js</h2>
<p><a target="_blank" href="https://introjs.com/">Intro.js</a>, like the others, offers a rich set of features such as customizable steps and tooltips, keyboard navigation, theming, progress indicators and more. It too has extensive documentation. Intro.js is available under the open source AGPL v3 licence as well as a commercial licence with different pricing plans.</p>
<p>Take a look at these libraries and happy coding your next web application with one of these!</p>
]]></content:encoded></item><item><title><![CDATA[Datafaker: Simplifying Test Data Generation for Java and Kotlin]]></title><description><![CDATA[In the world of software development, effective testing is crucial to ensure the reliability and functionality of applications. A significant aspect of robust testing is the use of representative and reliable test data. [Datafaker][1], a powerful lib...]]></description><link>https://blog.teamnexus.in/datafaker-simplifying-test-data-generation-for-java-and-kotlin</link><guid isPermaLink="true">https://blog.teamnexus.in/datafaker-simplifying-test-data-generation-for-java-and-kotlin</guid><category><![CDATA[testdata]]></category><category><![CDATA[Java]]></category><category><![CDATA[library]]></category><category><![CDATA[mock]]></category><dc:creator><![CDATA[Prabhu R]]></dc:creator><pubDate>Mon, 04 Dec 2023 15:27:26 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/Wpnoqo2plFA/upload/bdbc4e0edf6d8c1db7bf9e7a55306a28.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<pre><code class="lang-markdown">
</code></pre>
<p>In the world of software development, effective testing is crucial to ensure the reliability and functionality of applications. A significant aspect of robust testing is the use of representative and reliable test data. <a target="_blank" href="https://www.datafaker.net/">Datafaker</a>, a powerful library for Java and Kotlin, simplifies the process of generating test data. In this article, we'll explore Datafaker and provide code examples, using the Maven coordinates to get started.</p>
<h2 id="heading-what-is-datafaker">What Is Datafaker?</h2>
<p>Datafaker is a Java and Kotlin library designed to streamline test data generation. It offers a user-friendly interface that makes creating mock data a breeze. Whether you need test data for a database, API endpoints, or other testing purposes, Datafaker is a tool of choice for simplified data generation. It can be used to generate fake data for a variety of purposes, such as:</p>
<ul>
<li>Testing software</li>
<li>Creating training data for machine learning models</li>
<li>Anonymizing data</li>
<li>Generating mock data for presentations</li>
</ul>
<h2 id="heading-key-features-of-datafaker">Key Features of Datafaker</h2>
<p>Datafaker boasts a range of features that make it an indispensable tool for developers and testers:</p>
<ol>
<li><p><strong>Variety of Data Types:</strong> Datafaker supports a wide array of data types, including names, addresses, phone numbers, emails, dates, numbers, and more. This versatility ensures you can generate diverse test data for different use cases.</p>
</li>
<li><p><strong>Fake Data Providers:</strong> Datafaker ships with many providers (233 as of 2.0.0), grouped as follows:</p>
<ul>
<li>Base (providers of everyday data)</li>
<li>Entertainment (providers for movies, shows, books)</li>
<li>Food (providers for different types of food)</li>
<li>Sport (providers for different types of sport)</li>
<li>Videogame (video game providers)</li>
</ul>
</li>
<li><p><strong>Customization:</strong> You can customize data generation by setting specific constraints or formats, or by implementing your own data provider. For instance, you can define date formats, create data within a specific range, or adhere to specific patterns.</p>
</li>
<li><p><strong>Multiple locales:</strong> Datafaker lets you generate data for multiple locales and mix them easily. The simplest approach is to create one Faker per locale and mix between those fakers.</p>
</li>
<li><p><strong>Repeatable random results:</strong> To generate predictable, repeatable data, you can provide a seed; fake objects are then created in a deterministic way, which is handy when results need to be reproduced across runs.</p>
</li>
<li><p><strong>Bulk Data Generation:</strong> Datafaker allows for bulk data generation, making it easy to create extensive datasets for comprehensive testing. Bulk results can be returned as a Java Collection or a Java Stream, whichever the test needs.</p>
</li>
<li><p><strong>Export/Transform Data:</strong> The generated data can be easily exported or transformed into multiple formats, such as XML, JSON, CSV, and SQL, ensuring compatibility with various testing and development environments.</p>
</li>
</ol>
<p>There are other projects like <a target="_blank" href="http://dius.github.io/java-faker/">Java Faker</a>, <a target="_blank" href="https://serpro69.github.io/kotlin-faker/">Kotlin Faker</a> and <a target="_blank" href="https://devskiller.github.io/jfairy/">JFairy</a> that provide similar functionality; Datafaker, however, is under quite active development.</p>
<p>Now, let's dive into code examples to illustrate how Datafaker can be used for test data generation.</p>
<h2 id="heading-code-examples">Code Examples</h2>
<p>To get started with Datafaker in Java, add the following Maven dependency:</p>
<pre><code class="lang-xml">&lt;dependency&gt;
    &lt;groupId&gt;net.datafaker&lt;/groupId&gt;
    &lt;artifactId&gt;datafaker&lt;/artifactId&gt;
    &lt;version&gt;2.0.2&lt;/version&gt;
&lt;/dependency&gt;
</code></pre>
<p>Now, let's look at how you can generate random names, email addresses and phone numbers in Java:</p>
<pre><code class="lang-java">import java.util.List;

import net.datafaker.Faker;

public class TestDataGeneration {
    public static void main(String[] args) {
        Faker faker = new Faker();

        // Generate random names and contact details
        String firstName = faker.name().firstName();
        String lastName = faker.name().lastName();
        String fullName = faker.name().fullName();
        String email = faker.internet().emailAddress();
        String phone = faker.phoneNumber().phoneNumber();

        // Generate a collection of names
        List&lt;String&gt; names = faker.collection(
                () -&gt; faker.name().firstName(),
                () -&gt; faker.name().lastName())
            .len(10)
            .generate();
    }
}
</code></pre>
<p>This Java code snippet generates a first name, a last name, a full name, an email address and a phone number, followed by a collection of 10 names built from two Suppliers, one supplying first names and the other last names.</p>
<p>Bulk data can be returned as a Stream as well, as in the following snippet:</p>
<pre><code class="lang-java">Stream&lt;String&gt; names =
    faker.stream(
            () -&gt; faker.name().firstName(),
            () -&gt; faker.name().lastName())
        .len(10)
        .generate();
</code></pre>
<p>Datafaker also provides a number of features for generating more complex data. For example, you can use Datafaker to generate fake data for:</p>
<ul>
<li>Addresses</li>
<li>Companies</li>
<li>Credit cards</li>
<li>Dates and times</li>
<li>Locations</li>
<li>Products</li>
<li>Services</li>
<li>Vehicles</li>
</ul>
<p>To generate more complex data, you can use Datafaker's providers. Providers are classes that generate fake data for a specific kind of data. For example, the <code>Address</code> provider generates fake addresses, while the <code>Company</code> provider generates fake companies.</p>
<p>Here is an example of how to use Datafaker's <code>Company</code> provider to generate a fake company profile:</p>
<pre><code class="lang-java">import net.datafaker.Faker;

public class Example {
    public static void main(String[] args) {
        Faker faker = new Faker();

        String name = faker.company().name();
        String catchPhrase = faker.company().catchPhrase();
        String website = faker.internet().domainName();

        System.out.println("Name: " + name);
        System.out.println("Catch phrase: " + catchPhrase);
        System.out.println("Website: " + website);
    }
}
</code></pre>
<p>This code will generate a fake company profile with a random name, catch phrase, and website.</p>
<p>For Kotlin, the code is more or less similar except for the Kotlin constructs.</p>
<p>Efficient testing requires reliable and representative test data, and Datafaker excels at this task. With its intuitive interface and wide array of data generation capabilities, Datafaker proves to be a valuable tool for both developers and testers. Whether you need to generate names, addresses, user data, or any other type of test data, Datafaker is your trusted companion. Give it a try, and experience how it streamlines the testing process, saving you time and effort in the long run.</p>
<p>To get started with Datafaker, you can find it on Maven Central using the Maven coordinates shown above. The documentation on the official website is also comprehensive; please read it to understand the wide range of options it provides.</p>
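<p>The repeatable, seeded generation mentioned above can be sketched as follows (the class name <code>SeededFakerSketch</code> and the seed value are illustrative; this assumes the datafaker dependency shown earlier is on the classpath):</p>

```java
import java.util.Locale;
import java.util.Random;

import net.datafaker.Faker;

public class SeededFakerSketch {
    public static void main(String[] args) {
        // Two fakers built from the same locale and seed draw the same
        // random sequence, so they produce identical data
        Faker a = new Faker(Locale.ENGLISH, new Random(42));
        Faker b = new Faker(Locale.ENGLISH, new Random(42));

        String nameA = a.name().fullName();
        String nameB = b.name().fullName();

        // Identical seeds -> identical values, handy for reproducible tests
        System.out.println(nameA.equals(nameB)); // prints "true"
    }
}
```

<p>Pinning the seed in a test fixture keeps assertions stable across runs while still exercising realistic-looking data.</p>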
]]></content:encoded></item><item><title><![CDATA[Discovering "Everything": A Powerful File Search Tool]]></title><description><![CDATA[Imagine a world where you can instantly find any file on your computer with just a few keystrokes. No more endless clicking through folders or searching through a cluttered desktop. Welcome to the world of "Everything" - a remarkable file search tool...]]></description><link>https://blog.teamnexus.in/discovering-everything-a-powerful-file-search-tool</link><guid isPermaLink="true">https://blog.teamnexus.in/discovering-everything-a-powerful-file-search-tool</guid><category><![CDATA[Productivity]]></category><category><![CDATA[tools]]></category><category><![CDATA[search]]></category><category><![CDATA[Windows]]></category><dc:creator><![CDATA[Prabhu R]]></dc:creator><pubDate>Fri, 20 Oct 2023 17:59:49 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/d9ILr-dbEdg/upload/f39c24177d28430828609ef8ceaf0405.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Imagine a world where you can instantly find any file on your computer with just a few keystrokes. No more endless clicking through folders or searching through a cluttered desktop. Welcome to the world of "Everything" - a remarkable file search tool that is lean in size and on the resources it uses.</p>
<p>"Everything" is a free, lightweight and lightning-fast file search utility for Windows, developed by David Carpenter of Voidtools. This gem of a software is designed to make file searching as efficient and effortless as possible. Moreover, it is portable too, so no installation is necessary! Let's look at some of the key features.</p>
<p><strong>Key Features:</strong></p>
<ol>
<li><p><strong>Instant Search:</strong> The hallmark feature of "Everything" is its speed. As soon as you start typing in the search bar, it displays results in real-time. It's like having Google Search for your local files.</p>
</li>
<li><p><strong>Light on Resources:</strong> Despite its powerful capabilities, "Everything" is incredibly light on system resources. It won't slow down your computer, even when indexing large drives.</p>
</li>
<li><p><strong>Real-time updates:</strong> Changes to the file system, such as adding or deleting files, update the index in real time</p>
</li>
<li><p><strong>Regex Support:</strong> For advanced users, "Everything" supports regular expressions, allowing you to create complex search queries with ease.</p>
</li>
<li><p><strong>Content Search:</strong> It has the ability to search within the contents of the files too!</p>
</li>
<li><p><strong>Quick Access:</strong> You can use custom keyboard shortcuts to access "Everything" from anywhere, enhancing your productivity.</p>
</li>
<li><p><strong>Easy to use:</strong> Everything has a simple and intuitive user interface.</p>
</li>
</ol>
<p><strong>How to Get Started:</strong></p>
<ol>
<li><p><strong>Download and Install:</strong> Visit the <a target="_blank" href="https://www.voidtools.com">official Voidtools website</a> to download and install "Everything". You can choose between an installer and a portable version too!</p>
</li>
<li><p><strong>Index Your Drives:</strong> After installation, "Everything" will index the files on your drives. This initial process might take some time, but the payoff is worth it.</p>
</li>
<li><p><strong>Start Searching:</strong> Once indexing is complete, simply type your query into the search bar, and watch as "Everything" presents you with instant, accurate results.</p>
</li>
</ol>
<p><strong>Use Cases:</strong></p>
<ul>
<li><p><strong>Organizing Files:</strong> Easily locate and manage files, even if you can't remember where you saved them.</p>
</li>
<li><p><strong>Quick Document Retrieval:</strong> Find important documents, presentations, or spreadsheets in seconds.</p>
</li>
<li><p><strong>Cleaning Up Clutter:</strong> Identify and remove duplicate files or old documents you no longer need.</p>
</li>
<li><p><strong>Efficient Workflows:</strong> Streamline your workflow by accessing files rapidly, saving you valuable time.</p>
</li>
</ul>
<p>In a world where digital clutter can overwhelm us, "Everything" comes to the rescue as an indispensable tool for file management and organization. Give it a try, and you'll wonder how you ever managed without it. Download "Everything" today and take control of your digital world like never before.</p>
]]></content:encoded></item><item><title><![CDATA[Story Points - Intent and Effectiveness]]></title><description><![CDATA[In some companies, "Story Points" have the following two purposes.

A convoluted measure for customer billing, the customer won't really know how much effort has been put up for the said amount of story points in the bill.

A number by which the deve...]]></description><link>https://blog.teamnexus.in/story-points-intent-and-effectiveness</link><guid isPermaLink="true">https://blog.teamnexus.in/story-points-intent-and-effectiveness</guid><category><![CDATA[agile]]></category><category><![CDATA[Scrum]]></category><category><![CDATA[project management]]></category><category><![CDATA[storypoints]]></category><dc:creator><![CDATA[Prabhu R]]></dc:creator><pubDate>Sat, 23 Sep 2023 08:45:52 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/a53bWJk1sz0/upload/701a086c1824ca51680d265b1173e736.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In some companies, "Story Points" have the following two purposes.</p>
<ul>
<li><p>A convoluted measure for customer billing; the customer won't really know how much effort went into the stated number of story points on the bill.</p>
</li>
<li><p>A number that gives the developer no real idea of how much work they are taking on</p>
</li>
</ul>
<p>Essentially, story points are an abstract way to define the complexity of a task at hand, usually denoted in terms of the numbers in the Fibonacci series (natural numbers are acceptable too). The higher the number, the more complex the task; so, if the number is large, it implies that the task will have to be broken down into smaller, manageable pieces.</p>
<p>Story points were intended to help developers provide an effort estimate without committing to a number of days or hours, which is usually hard to get accurate. In practice, however, they have ended up masking any direct way to calculate the effort involved in Time &amp; Material projects.</p>
<p>They are also meant to help in building clear and expressive burndown charts that show how big the tasks the team worked on were.</p>
<p>Sometimes, story points are twisted so much that the estimates behind them are simply made up and the points don't really matter. Another problem with story points is that management expects the team to achieve the same number of story points or more in each sprint, in the name of "increased productivity", even when the team is understaffed. In response, the team inflates the story points just to stay level with previous sprints, so that they will not be questioned by management. Hence, neither the manager nor the team can tell the exact amount of work done. They have to settle for the figures, each knowing the other is cheating.</p>
<p>Some interesting dialogues we usually hear on story points.</p>
<p><em>"The customer won’t agree for that many story points bring it down further…"</em></p>
<p><em>"Do the story points include the testing effort?"</em></p>
<p><em>"Please move the incomplete story points to the next sprint..."</em></p>
<p><strong><em>Excerpt from my book -</em></strong> <a target="_blank" href="https://notionpress.com/read/excel-in-it"><strong><em>Excel in IT</em></strong></a></p>
]]></content:encoded></item><item><title><![CDATA[Code-athon, Code marathon or Hackthon: Is it good?]]></title><description><![CDATA[In the ever-evolving landscape of tech-driven workplaces, companies are increasingly adopting unique approaches to foster innovation, collaboration, and rapid problem-solving.
One such approach gaining popularity is the code-athon, code marathon or h...]]></description><link>https://blog.teamnexus.in/code-athon-code-marathon-or-hackthon-is-it-good</link><guid isPermaLink="true">https://blog.teamnexus.in/code-athon-code-marathon-or-hackthon-is-it-good</guid><category><![CDATA[Programming Blogs]]></category><category><![CDATA[Productivity]]></category><category><![CDATA[practice]]></category><dc:creator><![CDATA[Prabhu R]]></dc:creator><pubDate>Fri, 08 Sep 2023 15:43:04 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/npxXWgQ33ZQ/upload/92bba60a473ec01b541a2cbea6390a53.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the ever-evolving landscape of tech-driven workplaces, companies are increasingly adopting unique approaches to foster innovation, collaboration, and rapid problem-solving.</p>
<p>One such approach gaining popularity is the code-athon, code marathon or hackathon — a structured event where teams come together in the office for a concentrated burst of coding, creativity, and camaraderie. Typically, this happens over a duration of 48 to 72 hours at a stretch.</p>
<p>In some companies, this has become more of a monthly or bimonthly ritual. While these events have their merits, they also come with their own set of challenges. Let's look at the pros and cons of these code marathons.</p>
<p><strong>Pros:</strong></p>
<ol>
<li><p><strong>Innovation Boost:</strong> Code-athons encourage teams to think outside the box, experiment with new ideas, and innovate quickly. These bursts of creativity can lead to groundbreaking solutions.</p>
</li>
<li><p><strong>Team Building:</strong> Bringing teams together for intensive coding sessions fosters collaboration and strengthens team bonds. It promotes a sense of belonging and shared achievement.</p>
</li>
<li><p><strong>Accelerated Development:</strong> Code-athons are excellent for rapidly developing and prototyping new features or products, helping companies stay competitive in fast-paced markets.</p>
</li>
<li><p><strong>Skill Enhancement:</strong> Participants often learn new skills, tools, and technologies during these events, which can benefit both their personal growth and the company's technological prowess.</p>
</li>
<li><p><strong>Problem Solving:</strong> Code-athons provide a platform to address complex problems that may have been lingering, offering fresh perspectives and innovative solutions.</p>
</li>
</ol>
<p><strong>Cons:</strong></p>
<ol>
<li><p><strong>Burnout Risk:</strong> Code-athons at regular intervals can lead to burnout if not managed properly. Intense, recurring events may cause fatigue and negatively impact overall productivity.</p>
</li>
<li><p><strong>Quality vs. Speed:</strong> The emphasis on speed and deadlines may prioritize quantity over quality. Rushed code can lead to technical debt and long-term maintenance challenges.</p>
</li>
<li><p><strong>Inclusivity Concerns:</strong> Not all team members may thrive in such high-pressure environments. Code-athons can unintentionally exclude those who work better at a steady, sustainable pace.</p>
</li>
<li><p><strong>Sustainability:</strong> Maintaining the frequency of code-athons can be challenging in the long run. Teams may struggle to sustain enthusiasm and participation.</p>
</li>
<li><p><strong>Resource Allocation:</strong> These events require time and resources, potentially diverting focus from ongoing projects and strategic initiatives.</p>
</li>
</ol>
<p>Doing it once in a while definitely has its benefits, but making it a regular ritual results in the team getting exhausted, becoming less motivated and turning counterproductive.</p>
<p>In conclusion, code-athons can be a powerful tool for boosting innovation and team cohesion. However, they should be approached with care and consideration for their potential downsides. Striking the right balance between regular code-athons and everyday work is key. Ultimately, the success of such rituals lies in the company's ability to manage the intensity, promote a culture of inclusivity, and ensure that the outcomes align with the organization's long-term goals. When done thoughtfully, code-athons can be a driving force behind a company's innovation and growth, however, overdoing it can become detrimental to the entire organization.</p>
]]></content:encoded></item><item><title><![CDATA[TypeScript: To be or not to be]]></title><description><![CDATA[Recently, there has been a noticeable trend where some individuals and organizations have announced their decision to drop TypeScript from their supported stack. Some have done it with a reason, but some want to do it for the drama. While it's entire...]]></description><link>https://blog.teamnexus.in/typescript-to-be-or-not-to-be</link><guid isPermaLink="true">https://blog.teamnexus.in/typescript-to-be-or-not-to-be</guid><category><![CDATA[TypeScript]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[programming languages]]></category><dc:creator><![CDATA[Prabhu R]]></dc:creator><pubDate>Fri, 08 Sep 2023 15:39:11 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/JNz9bQD3Oio/upload/90c22f9010dfb99738e31e177736b3ef.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Recently, there has been a noticeable trend where some individuals and organizations have announced their decision to drop <a target="_blank" href="https://www.typescriptlang.org/">TypeScript</a> from their supported stack. Some have done it with a reason, but some want to do it for the drama. While it's entirely valid for teams to reevaluate their technology choices, it's essential to emphasize the significance of constructive and well-informed discussions, especially when it comes to a versatile language like TypeScript.</p>
<p>Similarly, the once-hailed jQuery was suddenly trampled on, as if it were a sin to use jQuery in projects. This sort of behaviour has been seen with other languages as well - Java, for example. People have been proclaiming that <strong><em>"Java is dead"</em></strong> for a very long time, yet Java has kept growing strong ever since. Likewise, R programmers used to be mocked as being rarer than white elephants; after the machine learning buzz took off, many started learning R and praising it.</p>
<p>Though I haven't used TypeScript extensively, I can see how the language filled a gap where JavaScript fell short, and how it evolved to be adopted by many frameworks and libraries. Here are some of my observations in the context of TypeScript:</p>
<p><strong>1. TypeScript's Strengths:</strong> TypeScript has gained popularity for its strong typing system, improved developer tooling, and enhanced code maintainability. It's essential to acknowledge these strengths and consider the reasons why it became a part of your tech stack in the first place.</p>
<p><strong>2. Guarding Against the "Sheep Mentality":</strong> It's crucial to guard against the "sheep mentality," where decisions are made without considering the pros and cons. Dropping TypeScript simply because others are doing so without thoughtful evaluation may lead to missed opportunities and challenges.</p>
<p><strong>3. Continuous Improvement:</strong> Like any technology, TypeScript continues to evolve. The TypeScript team actively listens to feedback and consistently releases updates to address concerns and enhance the language's capabilities. Engaging in discussions and contributing to its development can lead to positive changes.</p>
<p><strong>4. Project-Specific Considerations:</strong> Technology choices should align with the specific needs and goals of your projects. What works for one may not work for another. It's crucial to evaluate TypeScript within the context of your projects and make informed decisions accordingly.</p>
<p><strong>5. Migration Challenges:</strong> Abruptly dropping TypeScript can lead to migration challenges, increased development costs, and potential disruptions to ongoing projects. A well-thought-out transition plan, driven by constructive discussions, can help mitigate these challenges.</p>
<p><strong>6. Learning and Collaboration:</strong> TypeScript has a vibrant and supportive community. Engaging in respectful conversations, sharing experiences, and seeking advice from experienced TypeScript developers can enhance your team's knowledge and foster collaboration.</p>
<p>TypeScript, like any technology, deserves thoughtful consideration and open dialogue. While it's acceptable to reassess your tech stack, it's crucial to do so with respect for the language's strengths and an understanding of the broader context. Constructive conversations not only benefit your team but also contribute to the continuous improvement of TypeScript and the tech community as a whole. Let's remember that the tech world thrives when we approach challenges with curiosity, collaboration, and a willingness to learn, rather than following the herd blindly.</p>
]]></content:encoded></item><item><title><![CDATA[Prioritizing Tasks - Time Management]]></title><description><![CDATA[One of the important aspects of time management is to prioritize the tasks in hand. This is more of a daily routine one has to perform at the start of the day's work. However, for some prioritizing might be confusing or difficult. That's where the Pr...]]></description><link>https://blog.teamnexus.in/prioritizing-tasks-time-management</link><guid isPermaLink="true">https://blog.teamnexus.in/prioritizing-tasks-time-management</guid><category><![CDATA[prioritization]]></category><category><![CDATA[Time management]]></category><category><![CDATA[tasks]]></category><category><![CDATA[task management]]></category><dc:creator><![CDATA[Prabhu R]]></dc:creator><pubDate>Tue, 26 Jul 2022 10:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1658772381154/_ih1tB5kb.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>One of the important aspects of time management is to prioritize the tasks in hand. This is more of a daily routine one has to perform at the start of the day's work. However, for some prioritizing might be confusing or difficult. That's where the Priority Quadrant comes to the rescue. </p>
<p>The Priority Quadrant, also called the Eisenhower Important/Urgent Principle or Eisenhower Matrix, is a simple rule for classifying the tasks in hand. It is based on the Urgency and Criticality or Importance of the task. </p>
<p>Urgency implies whether the task needs to be addressed at once or at the earliest possible opportunity</p>
<p>Criticality/Importance means completion of the task is significant to achieving the goal</p>
<p>The Priority Quadrant requires one to classify the tasks based on the following</p>
<ul>
<li>Urgent, Important</li>
<li>Urgent, Not-Important </li>
<li>Not-Urgent, Important</li>
<li>Not-Urgent, Not-Important</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1658772431722/jU3hH2Xxn.png" alt="priority-quadrant.png" /></p>
<p>The classifications themselves should be pretty self-explanatory. The image above also illustrates how to handle each of those categories of tasks.</p>
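<p>As a minimal sketch, the four classifications map to actions along the following lines (the action names here follow the commonly cited Eisenhower guidance and are an assumption, not taken from this article):</p>

```python
def eisenhower_action(urgent: bool, important: bool) -> str:
    """Map a task's urgency and importance to the usual Eisenhower advice."""
    if urgent and important:
        return "Do it now"        # Urgent, Important
    if important:
        return "Schedule it"      # Not-Urgent, Important
    if urgent:
        return "Delegate it"      # Urgent, Not-Important
    return "Drop it"              # Not-Urgent, Not-Important
```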
<p>Sometimes, one may find that all tasks seem urgent and important, or have difficulty categorizing them. In such cases, it is better to seek the help of the superior who assigned the tasks to prioritize them.</p>
<p>There are a lot of tools available for free that can be used to prioritize tasks - Microsoft To Do, Trello, Asana, etc.</p>
<p>Initially, getting the hang of the priority quadrant might seem a bit tough, but over time it makes life easier.</p>
]]></content:encoded></item><item><title><![CDATA[Kroki - Diagrams from textual descriptions!]]></title><description><![CDATA[There are a lot of tools that enable rendering of diagrams using text descriptions. The popular ones include

Mermaid - A diagramming and charting tool that uses Markdown-inspired text definitions. Supports drawing of Flowchart, Sequence Diagram, Gra...]]></description><link>https://blog.teamnexus.in/kroki-diagrams-from-textual-descriptions</link><guid isPermaLink="true">https://blog.teamnexus.in/kroki-diagrams-from-textual-descriptions</guid><category><![CDATA[tools]]></category><category><![CDATA[Productivity]]></category><category><![CDATA[Diagram]]></category><category><![CDATA[mermaid]]></category><category><![CDATA[kroki]]></category><dc:creator><![CDATA[Prabhu R]]></dc:creator><pubDate>Fri, 01 Jul 2022 10:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1656594924610/xh-4xxiF7.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There are a lot of tools that enable rendering of diagrams using text descriptions. The popular ones include</p>
<ul>
<li><a target="_blank" href="http://mermaid-js.github.io/mermaid/#/">Mermaid</a> - A diagramming and charting tool that uses Markdown-inspired text definitions. Supports drawing of Flowchart, Sequence Diagram, Graphs, Class Diagrams, ER Diagrams and much more</li>
<li><a target="_blank" href="https://graphviz.org/">Graphviz</a> - A graph visualisation software to represent structural information as a diagram of abstract graphs and networks</li>
<li><a target="_blank" href="http://ditaa.sourceforge.net/">Ditaa</a> - A small command-line utility written in Java that can convert diagrams drawn using ASCII art into bitmap graphics</li>
<li><a target="_blank" href="https://plantuml.com">PlantUML</a> - A component that allows you to write Sequence diagrams, Usecase diagrams, Class diagrams, Object diagrams, Activity diagrams, Component diagrams, Deployment diagrams, State diagrams, Timing diagrams and more</li>
<li><a target="_blank" href="http://blockdiag.com/">BlockDiag</a> - A group of projects comprising several diagramming tools - BlockDiag, SeqDiag, ActDiag, NwDiag, PacketDiag, RackDiag</li>
</ul>
<p>The list goes on. Almost all of them are open source and free. Each of these tools is written in a different programming language: Mermaid in JavaScript, BlockDiag in Python, PlantUML in Java, and so on.</p>
<p>Having all of them under a single roof has its advantages, but installing them along with their prerequisites can be a considerable task. That's where <a target="_blank" href="https://kroki.io">Kroki</a> comes to the rescue. It is both open-source software that can be installed locally and a free online service.</p>
<p>It is astonishing to see support for so many diagramming tools all in one place. Kroki provides support for BlockDiag (BlockDiag, SeqDiag, ActDiag, NwDiag, PacketDiag, RackDiag), BPMN, Bytefield, C4 (with PlantUML), Ditaa, Erd, Excalidraw, GraphViz, Mermaid, Nomnoml, Pikchr, PlantUML, Structurizr, SvgBob, UMLet, Vega, Vega-Lite, WaveDrom. It is constantly updated to add more tools as well.</p>
<p>The beauty of Kroki is that it also provides HTTP APIs to create diagrams, which can be accessed using tools like cURL. In addition to the service, it also has a <a target="_blank" href="https://docs.kroki.io/kroki/setup/kroki-cli/">CLI</a> - Command Line Interface - that uses <a target="_blank" href="https://kroki.io">kroki.io</a> as the backend.</p>
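<p>As an illustrative sketch (assuming the public <code>kroki.io</code> endpoint), the GET API expects the diagram source to be deflate-compressed and URL-safe base64-encoded into the URL path:</p>

```python
import base64
import zlib

def kroki_url(diagram_type: str, output_format: str, source: str,
              server: str = "https://kroki.io") -> str:
    """Build a Kroki GET URL: compress the diagram source with deflate,
    then encode it with URL-safe base64 into the path."""
    payload = base64.urlsafe_b64encode(
        zlib.compress(source.encode("utf-8"), 9)
    ).decode("ascii")
    return f"{server}/{diagram_type}/{output_format}/{payload}"

# Fetching this URL (with cURL or a browser) returns the rendered SVG.
url = kroki_url("graphviz", "svg", "digraph G { Hello -> World }")
```

<p>Kroki also accepts POST requests with the diagram source in the request body, which avoids the encoding step entirely.</p>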
<p>Try Kroki for happy diagramming!</p>
]]></content:encoded></item><item><title><![CDATA[Database of Databases]]></title><description><![CDATA[Databases are an integral part of any software application. Databases have played a critical role for a very long time in the history of computers and software.
Databases come in a wide variety solving different problems and scenarios, that include

...]]></description><link>https://blog.teamnexus.in/database-of-databases</link><guid isPermaLink="true">https://blog.teamnexus.in/database-of-databases</guid><category><![CDATA[Databases]]></category><category><![CDATA[database]]></category><category><![CDATA[catalog]]></category><category><![CDATA[index]]></category><dc:creator><![CDATA[Prabhu R]]></dc:creator><pubDate>Thu, 23 Jun 2022 11:12:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1655982667258/MXPJ68r_6.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Databases are an integral part of any software application. Databases have played a critical role for a very long time in the history of computers and software.</p>
<p>Databases come in a wide variety solving different problems and scenarios, that include</p>
<ul>
<li><p>Relational Databases</p>
<ul>
<li><a target="_blank" href="https://postgresql.org">PostgreSQL</a> - Free &amp; Open source</li>
<li><a target="_blank" href="https://mysql.com">MySQL</a> - Free &amp; Open source</li>
<li><a target="_blank" href="https://mariadb.org">MariaDB</a> - Free &amp; Open source</li>
<li><a target="_blank" href="https://sqlite.org">SQLite</a> - Free, Embedded &amp; Open source</li>
<li><a target="_blank" href="https://h2database.com/">H2</a> - Free, Embedded &amp; Open source</li>
<li><a target="_blank" href="https://hsqldb.org/">HSQLDB</a> - Free, Embedded &amp; Open source</li>
<li><a target="_blank" href="https://duckdb.org/">DuckDB</a> - Free &amp; Open source</li>
<li><a target="_blank" href="https://oracle.com/">Oracle</a> - Commercial</li>
<li><a target="_blank" href="https://www.microsoft.com/en-in/sql-server/sql-server-2019">SQL Server</a> - Commercial</li>
</ul>
</li>
<li><p>NoSQL Databases</p>
<ul>
<li><a target="_blank" href="https://www.mongodb.com/">MongoDB</a></li>
<li><a target="_blank" href="https://couchdb.apache.org/">CouchDB</a></li>
<li><a target="_blank" href="https://cassandra.apache.org/">Cassandra</a></li>
</ul>
</li>
<li><p>Key / Value Stores</p>
<ul>
<li><a target="_blank" href="https://redis.io/">Redis</a></li>
<li><a target="_blank" href="http://memcached.org/">Memcached</a></li>
<li><a target="_blank" href="http://www.project-voldemort.com/voldemort/">Project Voldemort</a></li>
</ul>
</li>
<li><p>Multi-model &amp; Graph Databases</p>
<ul>
<li><a target="_blank" href="https://orientdb.org/">OrientDB</a></li>
<li><a target="_blank" href="https://www.arangodb.com/">ArangoDB</a></li>
<li><a target="_blank" href="https://neo4j.com/">Neo4J</a></li>
</ul>
</li>
</ul>
<p>The list can go on and on. However, there is a comprehensive database of the databases that are or have been in use.</p>
<p>Enter <a target="_blank" href="https://dbdb.io">dbdb.io</a>, the <em>Database of Databases</em> created and maintained by the <a target="_blank" href="https://db.cs.cmu.edu/">Carnegie Mellon Database Group</a>. The catalog has entries for 841 databases (at the time of writing this article). It is fascinating to see the number of databases that have existed, their history and their technical details.</p>
<p>The list provided above is just the tip of the iceberg, covering only the quite popular ones!</p>
<p>The site provides a nice search feature. It also has leaderboards grouped under different parameters.</p>
<p>Visit the <em>"database of databases"</em> to learn more!</p>
]]></content:encoded></item><item><title><![CDATA[Repetitive Strain Injury - What is it & How to avoid]]></title><description><![CDATA[These days many jobs are office based and most of the people who work in these jobs sit in an office desk, usually, in front of a computer. As a result, they tend to sit there for hours together in the same posture or doing repeatedly the same moveme...]]></description><link>https://blog.teamnexus.in/repetitive-strain-injury-what-is-it-and-how-to-avoid</link><guid isPermaLink="true">https://blog.teamnexus.in/repetitive-strain-injury-what-is-it-and-how-to-avoid</guid><category><![CDATA[tools]]></category><category><![CDATA[Health,]]></category><category><![CDATA[Productivity]]></category><category><![CDATA[rsi]]></category><dc:creator><![CDATA[Prabhu R]]></dc:creator><pubDate>Tue, 14 Jun 2022 06:18:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1655187431512/Bpy-smLJp.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>These days many jobs are office-based, and most people in these jobs sit at a desk, usually in front of a computer. As a result, they tend to sit there for hours at a stretch in the same posture or repeatedly perform the same movements, causing inflammation, pain and damage to the soft tissues, muscles, tendons etc. This leads to trigger finger (also called BlackBerry Thumb, PlayStation Thumb &amp; Smartphone Thumb), tennis elbow and carpal tunnel syndrome (Rubik's Wrist or Raver's Wrist).</p>
<p>These effects/injuries are commonly referred to as Repetitive Strain Injury (RSI). It is a gradual build-up of damage to muscles and nerves that tends to cause</p>
<ul>
<li>Pain</li>
<li>Tightness</li>
<li>Numbness</li>
<li>Tingling Sensation</li>
<li>Dull ache</li>
</ul>
<p>and similar symptoms.</p>
<p>It is common among people who have a sedentary job, sitting almost all day. This includes</p>
<ul>
<li>Programmers</li>
<li>People playing computer games for a long time</li>
<li>People on their Smartphone for a long time</li>
<li>Office Assistants</li>
<li>Office Managers</li>
<li>Call support engineers</li>
</ul>
<p>and the like. Taking remedial measures as soon as early symptoms appear helps avoid complications. Even better is to use tools or software that help avoid RSI in the first place. There are many available, but in this post we will look at a few that are free and cross-platform.</p>
<p>Most of these RSI-prevention programs use the <a target="_blank" href="https://en.wikipedia.org/wiki/Pomodoro_Technique">Pomodoro Technique</a> - a time management method invented by Francesco Cirillo in which work is split into smaller intervals separated by short breaks. All of them offer similar functionality: prompting for breaks at configured time intervals.</p>
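<p>The scheduling idea behind these tools can be sketched as follows (the 25/5-minute defaults are the conventional Pomodoro values, not settings taken from any of these apps):</p>

```python
def pomodoro_schedule(work_min: int = 25, break_min: int = 5,
                      cycles: int = 4) -> list:
    """Return alternating (phase, minutes) pairs: a work interval,
    then a short break, repeated for the given number of cycles."""
    schedule = []
    for _ in range(cycles):
        schedule.append(("work", work_min))
        schedule.append(("break", break_min))
    return schedule

# A default session: 4 cycles of 25 minutes work + 5 minutes break.
session = pomodoro_schedule()
```

<p>A real break reminder would sleep through each interval and raise a desktop notification at every phase change, which is essentially what the tools below do.</p>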
<h2 id="heading-workrave">Workrave</h2>
<p>First in the list is <a target="_blank" href="https://workrave.org/">Workrave</a>. A portable version is available from <a target="_blank" href="https://portableapps.com/apps/utilities/workrave_portable">PortableApps</a>. Workrave monitors mouse movements and keyboard typing and provides the following features</p>
<ul>
<li>Microbreaks - A short break every 10-20 minutes, typically for about 15-30 seconds. During a microbreak you can let go of the keyboard and mouse, look away from the screen, and relax a bit.</li>
<li>Rest breaks - A break away from your computer every 1 to 2 hours typically for about 5-10 minutes.</li>
<li>Daily Limit - The amount of time you use your computer. When you have reached the configured daily computer usage limit, you are prompted to stop using the computer for the day.</li>
</ul>
<p>Workrave presents a gentle <code>Break warning</code> before the break starts.</p>
<p>A beautiful feature is <code>Exercises</code> at the start of the rest break. Each exercise takes about 30 seconds, and the number of exercises shown is configurable.</p>
<p>Workrave also provides a number of statistics like breaks taken, skipped etc.</p>
<h2 id="heading-breaktimer-app">BreakTimer App</h2>
<p>The next app is the <a target="_blank" href="https://github.com/tom-james-watson/breaktimer-app">BreakTimer App</a>, which is also cross-platform. The installation is simple, and it sits in the system tray. It allows configuration of break frequency and break length. It also provides customization of work hours.</p>
<p>It shows a nice pop-up window or notification before the break and displays the configured break message. It does not have micro-breaks, rest breaks or a daily limit like Workrave, but it does the break-timing aspect well. It doesn't gather any statistics; it is just a plain, simple break timer.</p>
<p>All breaks can be postponed by a preconfigured time and can also be skipped.</p>
<h2 id="heading-stretchly">Stretchly</h2>
<p>The last app in this post is <a target="_blank" href="https://github.com/hovancik/stretchly">Stretchly</a> which is also a cross-platform app. There is a portable option available, just download and extract it to a directory and run the application, as simple as that. Like the other two programs, this one also sits in the tray and displays notifications during breaks.</p>
<p>Like Workrave, Stretchly has mini breaks and long breaks; however, it neither has daily limits nor collects any statistics. It does its job in a plain, simple manner: displaying break notifications - one just 10 seconds (configurable) before the break, and then a bigger, almost full-screen notification.</p>
<p>Another nice feature is that Stretchly suggests a small exercise or workout during each break. The entire program is Apple macOS themed, even on Microsoft Windows.</p>
<p>All three programs are quite good and do their jobs well; which one to use is just a matter of preference. They are definitely great software for avoiding a bad condition.</p>
<p>Try it yourself and stay healthy!</p>
]]></content:encoded></item><item><title><![CDATA[Stunning Presentations with Asciidoctor and RevealJS]]></title><description><![CDATA[In the previous post, we quickly saw the power of Asciidoctor, how it could enhance the documentation, writing and many other features.
One of the other great features is its ability to create stunning HTML presentations along with another utility ca...]]></description><link>https://blog.teamnexus.in/stunning-presentations-with-asciidoctor-and-revealjs</link><guid isPermaLink="true">https://blog.teamnexus.in/stunning-presentations-with-asciidoctor-and-revealjs</guid><category><![CDATA[tools]]></category><category><![CDATA[asciidoctor]]></category><category><![CDATA[presentations]]></category><category><![CDATA[revealjs]]></category><dc:creator><![CDATA[Prabhu R]]></dc:creator><pubDate>Tue, 07 Jun 2022 16:57:26 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1654621032417/k6z57C1kR.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the <a target="_blank" href="https://blog.teamnexus.in/blog/2022/06/03/asciidoctor-a-writers-swiss-army-knife/">previous post</a>, we quickly saw the power of <a target="_blank" href="https://asciidoctor.org">Asciidoctor</a>, how it could enhance the documentation, writing and many other features.</p>
<p>One of the other great features is its ability to create stunning HTML presentations along with another utility called <a target="_blank" href="https://revealjs.com/">RevealJS</a></p>
<p>Below is a sample presentation written as an Asciidoctor markup file.</p>
<pre><code>= Stunning Presentations
Prabhu R
:imagesdir: images
:title-slide-background-image: sea.jpg
:title-slide-transition: fade
:title-slide-transition-speed: fast
:experimental: <span class="hljs-literal">true</span>

== A Great Story

<span class="hljs-attr">image</span>::galaxy.jpg[background, size=<span class="hljs-string">'cover'</span>]
Press kbd:[s] <span class="hljs-keyword">for</span> Speaker View that displays notes  

[.notes]
--
* tell anecdote
* make a point
--

[transition=<span class="hljs-string">'convex'</span>]
== Transition Convex

This slide has a <span class="hljs-string">`convex`</span> effect

[background-color=<span class="hljs-string">"gray"</span>]
[transition=<span class="hljs-string">'zoom'</span>]
== Transition Zoom

This slide has a <span class="hljs-string">`zoom`</span> effect,

[background-color=<span class="hljs-string">"teal"</span>]
[transition=<span class="hljs-string">'zoom'</span>]
== Vertical Slides

This is a Vertical slide,  click kbd:[&amp;#x2193;] to see vertical slides

[background-color=<span class="hljs-string">"crimson"</span>]
[transition=<span class="hljs-string">'slide'</span>]
=== Vertical Slide <span class="hljs-number">1</span>

Vertical slide <span class="hljs-number">1</span>

[background-color=<span class="hljs-string">"brown"</span>]
[transition=<span class="hljs-string">'slide'</span>]
=== Vertical Slide <span class="hljs-number">2</span>

Vertical slide <span class="hljs-number">2</span>

[background-video=<span class="hljs-string">"orca.mp4"</span>,options=<span class="hljs-string">"loop,muted"</span>]
[transition=<span class="hljs-string">'concave'</span>]
== Background Video

Background looping video

[%notitle]
[transition=<span class="hljs-string">'concave'</span>]
== THE END

<span class="hljs-attr">image</span>::end.jpg[background, size=cover]
</code></pre><p>To see how stunningly this gets rendered, visit <a target="_blank" href="https://rprabhu.github.io/stunning-presentations/presentation.html">here</a></p>
<p>Pressing <kbd>Esc</kbd> shows the thumbnail view of all the slides. Clicking on any of the slides jumps to that slide </p>
<p>The complete source is available in <a target="_blank" href="https://github.com/rprabhu/stunning-presentations">GitHub</a>, and can be used as a starter template for your presentations</p>
<p>With a little CSS knowledge, even more amazing effects can be brought into the presentations. For more details, look at the <a target="_blank" href="https://docs.asciidoctor.org/reveal.js-converter/latest/">asciidoctor-revealjs site</a></p>
<p><a target="_blank" href="https://bentolor.github.io/java9to13/#/">Benjamin Schmid</a> has an even more amazing presentation, a nice example of what the Asciidoctor and RevealJS combination can produce.</p>
]]></content:encoded></item><item><title><![CDATA[Asciidoctor - A Writer's Swiss Army Knife]]></title><description><![CDATA[There are a lot of tools that are used by authors and writers when they write their content. However, some of the tools

have a steep learning curve
are expensive
get hard to maintain especially when there are more assets like images, diagrams and vi...]]></description><link>https://blog.teamnexus.in/asciidoctor-a-writers-swiss-army-knife</link><guid isPermaLink="true">https://blog.teamnexus.in/asciidoctor-a-writers-swiss-army-knife</guid><category><![CDATA[tools]]></category><category><![CDATA[documentation]]></category><category><![CDATA[writing]]></category><category><![CDATA[asciidoctor]]></category><dc:creator><![CDATA[Prabhu R]]></dc:creator><pubDate>Fri, 03 Jun 2022 11:07:56 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1654254507210/3DxjqQ9zK.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There are a lot of tools that are used by authors and writers when they write their content. However, some of the tools</p>
<ul>
<li>have a steep learning curve</li>
<li>are expensive</li>
<li>get hard to maintain especially when there are more assets like images, diagrams and videos</li>
<li>exporting to other formats suitable for the web or ebooks might require a lot of work</li>
<li>writing software technical documentation gets cumbersome</li>
<li>are hard to maintain and track changes for in a source control system</li>
</ul>
<p>Enter text-based markups! Recently, text-based markups have become popular, especially among software technical documentation writers. These include</p>
<ul>
<li><a target="_blank" href="https://daringfireball.net/projects/markdown/">Markdown</a></li>
<li><a target="_blank" href="https://docutils.sourceforge.io/rst.html">reStructuredText</a></li>
<li><a target="_blank" href="https://asciidoc.org/">Asciidoc</a></li>
</ul>
<p>Each one has its own pros and cons, but the beauty of them all is that they are plain text files with some formatting that enables easy transformation to other formats like HTML, PDF etc. The most popular one is <em>Markdown</em>, while the one with the most features and fewer quirks is <em>Asciidoc</em>.</p>
<p>The toolchain that brings out the power of Asciidoc is <a target="_blank" href="https://asciidoctor.org/">Asciidoctor</a>. It parses Asciidoc files and helps convert them to various formats - HTML5, PDF, DocBook, ePub, man pages etc.</p>
<p>A typical Asciidoc file looks like</p>
<pre><code>= Hello, AsciiDoc!
Doc Writer &lt;doc@example.com&gt;

An introduction to http:<span class="hljs-comment">//asciidoc.org[AsciiDoc].</span>

== First Section

* item <span class="hljs-number">1</span>
* item <span class="hljs-number">2</span>
</code></pre><p>It gets transformed into HTML as</p>
<p></p><h1>Hello AsciiDoc!</h1>
Doc Writer <a href="mailto:doc@example.com">&lt;doc@example.com&gt;</a><br /><br /><p></p>
<p>An introduction to <a target="_blank" href="https://asciidoc.org/">AsciiDoc</a>.</p>
<h2 id="heading-first-section">First Section</h2>
<ul>
<li>item 1</li>
<li>item 2</li>
</ul>
<p>It also allows code snippets to be embedded with syntax highlighting. Following is an Asciidoc block containing a Java code snippet, and how it gets rendered</p>
<pre><code>[source,java]
----
<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Java</span> </span>{
    public <span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> main(<span class="hljs-built_in">String</span>[] args){
        System.out.println(<span class="hljs-string">"Hello World!"</span>);
    }
}
----
</code></pre><pre><code><span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Java</span> </span>{
    public <span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> main(<span class="hljs-built_in">String</span>[] args){
        System.out.println(<span class="hljs-string">"Hello World!"</span>);
    }
}
</code></pre><p>It also enables text-based diagramming like <em>Mermaid, Graphviz, BlockDiag</em> etc inside the text so that the output renders as a nice diagram. For example, the following mermaid diagram text within the block gets rendered as the diagram below.</p>
<pre><code>[mermaid]
----
graph LR
A --&gt; B
A --&gt; C
B --&gt; D
C --&gt; D
----
</code></pre><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1654254385462/GZiAqkajm.jpg" alt="mermaid.jpg" /></p>
<p>Asciidoctor is powerful in many ways because of the plugins in its ecosystem. The plugins enable transforming the content to different formats including PDF, ePub, Stunning HTML5 based presentations and more. We will talk about the presentations in a subsequent post.</p>
<p>Some of the features that Asciidoctor supports are</p>
<ul>
<li>Lists</li>
<li>Tables</li>
<li>Videos - Background &amp; Foreground</li>
<li>Images - Background &amp; Foreground</li>
<li>Admonitions</li>
<li>Keyboard Macros</li>
<li>Custom Styles</li>
<li>Automatic Table of Contents based on Headings</li>
</ul>
<p>Many of the features can be activated and customised using attributes</p>
<p>The most important and greatest feature is the ability to include other Asciidoc files. This enables team members to work on each topic independently; when the final output is generated, Asciidoctor puts them all together in the right order.</p>
<pre><code>= Program Documentation

<span class="hljs-attr">include</span>::topic1.adoc[]
<span class="hljs-attr">include</span>::topic2.adoc[]
.
.
.
include::topicn.adoc[]
</code></pre><p>As mentioned earlier, the final output could be an HTML document, a PDF file, an ePub and much more.</p>
<p>It has many more features than what can be covered in this article. It is more of a publishing toolchain that is simple and easy to use.</p>
<p>Head on to <a target="_blank" href="https://asciidoctor.org/">Asciidoctor</a> page for more!</p>
<p>Happy writing!</p>
]]></content:encoded></item><item><title><![CDATA[Sleep and Be Productive]]></title><description><![CDATA[In the episode Night Terrors of The Star Trek: The Next Generation, the ship Enterprise and its crew go in search of missing science vessel USS Brattain in an uncharted binary star system and gets caught in a spatial phenomenon called The Tyken's Rif...]]></description><link>https://blog.teamnexus.in/sleep-and-be-productive</link><guid isPermaLink="true">https://blog.teamnexus.in/sleep-and-be-productive</guid><category><![CDATA[Productivity]]></category><category><![CDATA[health]]></category><dc:creator><![CDATA[Prabhu R]]></dc:creator><pubDate>Tue, 31 May 2022 14:30:56 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1654007325805/Kas6a06ja.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the episode <a target="_blank" href="https://en.wikipedia.org/wiki/Night_Terrors_(Star_Trek:_The_Next_Generation)"><em><strong>Night Terrors</strong></em></a> of <em><strong>Star Trek: The Next Generation</strong></em>, the ship <em><strong>Enterprise</strong></em> and its crew go in search of the missing science vessel <em>USS Brattain</em> in an uncharted binary star system and get caught in a spatial phenomenon called <em><strong>The Tyken's Rift</strong> - a massive rupture in space into which energy is absorbed</em>. The crew finds that they can escape the rift only by creating a tremendous explosion. Coincidentally, another telepathic alien species is caught on the other side of the rift. Their telepathic messages cause the crew of the Enterprise to fail to achieve REM (Rapid Eye Movement) sleep. Consequently, they become irritable and experience hallucinations. The two people who are unaffected - the ship's counsellor Deanna Troi and the second officer, Lt. Commander Data, an android - work to escape the rift. 
Meanwhile, the episode beautifully explains the effects of sleep deprivation on one's psychology and physiology.</p>
<ul>
<li>The crew slowly starts to behave strangely and stops functioning normally  </li>
<li>They tend to quarrel and start internal fights</li>
<li>They have difficulties in remembering and doing even the simplest of things</li>
<li>They start hallucinating and go out of control.</li>
</ul>
<p>The story finally ends on how they escaped the rift and got back to normalcy.</p>
<p><em>Sleep deprivation</em> is one of the major reasons people become chronically ill. At a young age, one might not realise the effects immediately, but the damage accumulates over the years and starts to show up with age. Eventually the body can take it no more and weakens. As a result, there can be</p>
<ul>
<li>Anxiety</li>
<li>Anger / Irritability</li>
<li>Stress</li>
<li>Hypertension</li>
<li>Diabetes</li>
<li>Heart Related Ailments</li>
<li>Nervous issues</li>
<li>Loss of appetite</li>
<li>Heartburn / Acidity</li>
<li>Hormonal issues</li>
<li>And more</li>
</ul>
<p>Millions of years of evolution made the human race a diurnal species, meaning one that is active during the day. This was mostly obeyed until the invention of electricity and light. That is when humans broke their <a target="_blank" href="https://en.wikipedia.org/wiki/Circadian_rhythm"><em>circadian rhythm</em></a> by allowing themselves to be awake 24/7. These days, smartphones also contribute to the loss of sleep, because people stare endlessly at their phones for a very long time.</p>
<p>Organizations started working in multiple shifts, putting people even on night shifts. As a result, the natural diurnal cycle went for a toss. Moreover, prolonged violation of the circadian rhythm adversely affects the other two rhythms as well - the <a target="_blank" href="https://en.wikipedia.org/wiki/Ultradian_rhythm"><em>ultradian rhythm</em></a> (within a day) and the <a target="_blank" href="https://en.wikipedia.org/wiki/Infradian_rhythm"><em>infradian rhythm</em></a> (longer than a day).</p>
<p>In most people, the immediate sign that the body is getting affected is heartburn or acidity. Ignoring it and taking medication to suppress it will result in other side effects over time.</p>
<p>Likewise, people lose cognitive ability and find it hard to focus, concentrate or even remember the simplest of things. For example, you go to the grocery store to buy some vegetables. Once there, you start thinking, "Why did I come here...? Ah, yes... Vegetables..."</p>
<p>It slowly starts to impair productive work hours, because people become drowsy at unusual times - say, right after lunch - and find it impossible to stay awake.</p>
<p>The effects are cumulative: by the time you realize the problem and start taking remedial steps, it may already have caused irreversible damage to the body - a nervous system issue, a weakened heart and so on.</p>
<p>The best way to avoid these effects and stay productive is to get a good night's sleep every day. Let the whole system rejuvenate!</p>
<p>Sleep well and be productive!</p>
]]></content:encoded></item><item><title><![CDATA[Professional Software Development at Zero Cost]]></title><description><![CDATA[To run a professional software development team and have a clean development process requires the following minimal tools. Especially, small teams that write great software could do better if they have these tools when they develop software. That too...]]></description><link>https://blog.teamnexus.in/professional-software-development-at-zero-cost</link><guid isPermaLink="true">https://blog.teamnexus.in/professional-software-development-at-zero-cost</guid><category><![CDATA[software development]]></category><category><![CDATA[Developer Tools]]></category><category><![CDATA[tools]]></category><category><![CDATA[free]]></category><category><![CDATA[General Programming]]></category><dc:creator><![CDATA[Prabhu R]]></dc:creator><pubDate>Thu, 26 May 2022 12:32:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1653568126560/fFypsA7Fx.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Running a professional software development team with a clean development process requires the following minimal tools. Small teams that write great software can do even better when they have these tools at hand. That too at zero cost! Yes, totally "zero" cost.</p>
<ul>
<li>IDEs</li>
<li>Source Control</li>
<li>Project Management Tools</li>
<li>Communication</li>
<li>Knowledge-base</li>
</ul>
<p>It is great to know that these tools are available for zero cost.</p>
<h2 id="heading-ide-integrated-development-environment">IDE - Integrated Development Environment</h2>
<p>Nowadays, all software development companies know the need for an IDE. IDEs come with a host of features that include language support, syntax highlighting, code completion, code suggestions, indentation and a lot more. Some of the popular IDEs are</p>
<ul>
<li><a target="_blank" href="https://code.visualstudio.com">Visual Studio Code</a></li>
<li><a target="_blank" href="https://eclipse.org">Eclipse</a></li>
<li><a target="_blank" href="https://netbeans.apache.org">Apache Netbeans</a></li>
</ul>
<p>These IDEs support multiple languages. There are also language-focused IDEs: <a target="_blank" href="https://jetbrains.com/idea">IntelliJ IDEA</a> for Java, <a target="_blank" href="https://atom.io">Atom</a> for the web, <a target="_blank" href="https://codeblocks.org">Code::Blocks</a> for C++ and so on.</p>
<h2 id="heading-source-control">Source Control</h2>
<p>Source Control is the heart of any development team. It would be surprising if a team did not use one. Unlike the old centralized source control systems like CVS and SVN, the modern distributed source control systems like Git, Mercurial, Bazaar etc., offer a wide array of features that include workflows, feature branches and many more. Those features enable integration with CI/CD tools, containerization etc.</p>
<p>Some of the popular cloud providers of version control are</p>
<ul>
<li><a target="_blank" href="https://github.com">GitHub</a></li>
<li><a target="_blank" href="https://gitlab.com">Gitlab</a></li>
<li><a target="_blank" href="https://bitbucket.com">Bitbucket</a> - Free up to 5 users</li>
</ul>
<p>The following are self-hosted options</p>
<ul>
<li><a target="_blank" href="https://rhodecode.com/">Rhodecode</a> - supports Git, Mercurial &amp; SVN</li>
<li><a target="_blank" href="https://gitea.io/en-us/">Gitea</a></li>
<li><a target="_blank" href="http://bazaar.canonical.com/en/">Bazaar</a></li>
</ul>
<p>One underrated distributed version control system that provides a wide array of features - bug tracking, wiki, forum etc. - in a single self-contained executable is <a target="_blank" href="https://fossil-scm.org/">Fossil SCM</a>.</p>
<h2 id="heading-project-management">Project Management</h2>
<p>Though <a target="_blank" href="https://github.com">GitHub</a> and <a target="_blank" href="https://gitlab.com">Gitlab</a> provide issue tracking, milestones etc., a complete project management system goes a long way, especially in following well-established methodologies like Agile, Scrum and Kanban. Most are commercial but for small teams they provide a free option. The following are quite popular</p>
<ul>
<li><a target="_blank" href="https://www.atlassian.com/software/jira">Atlassian JIRA</a> - free up to 10 users</li>
<li><a target="_blank" href="https://www.jetbrains.com/youtrack/">Jetbrains YouTrack</a> - free up to 10 users</li>
<li><a target="_blank" href="https://trac.edgewall.org/">Trac Project</a> - Free, Open Source, Self-hosted. Provides issue management, source control integration, wiki etc.</li>
<li><a target="_blank" href="https://redmine.org/">Redmine</a> - Free, Open Source, Self-hosted. Provides issue management, source control integration, wiki, forums etc.</li>
</ul>
<h2 id="heading-communication">Communication</h2>
<p>With remote working and work-from-home being the norm these days, having a common communication channel is essential. The following are the popular ones</p>
<ul>
<li><a target="_blank" href="https://slack.com">Slack</a></li>
<li><a target="_blank" href="https://zulip.com">Zulip</a></li>
<li><a target="_blank" href="https://www.zoho.com/cliq">Zoho Cliq</a></li>
<li><a target="_blank" href="https://teams.microsoft.com">Microsoft Teams</a></li>
</ul>
<h2 id="heading-knowledge-base">Knowledge base</h2>
<p>Having an organized knowledge base - internal project documents, requirements and other related information - is crucial to any project. As the project progresses, people forget where each document lives, resulting in confusion. To avoid such chaos later, it is better to document and update the details in a documentation system such as a wiki. The source control and project management systems mentioned earlier provide wiki options; however, sometimes it is preferable to keep the knowledge base separate. In such cases, the following could be used</p>
<ul>
<li><a target="_blank" href="https://www.atlassian.com/software/confluence">Atlassian Confluence</a> - Free up to 10 users</li>
<li><a target="_blank" href="https://js.wiki/">WikiJS</a> - Free, Open Source, Self-hosted</li>
<li><a target="_blank" href="https://www.bookstackapp.com/">BookStack</a> - Free, Open Source, Self-hosted</li>
</ul>
<p>Having the right tools is one part of professional software development. The other part is getting the processes right: coding standards, source control workflows and branching strategy, development/production environments, project management process, deployment process and others.</p>
<p>We will see those in the upcoming articles. Happy Software Development till then!</p>
]]></content:encoded></item></channel></rss>