Deploying LLMs on Raspberry Pi Powered HMI: Talk with Intelligent Facilities

Introduction

In the modern era of technological advancement, both smart factories and smart buildings represent the forefront of innovation in their respective domains. A smart facility, encompassing both manufacturing and structural management, leverages a blend of advanced technologies to optimize operations, enhance efficiency, and ensure seamless communication between systems and humans. By integrating the Internet of Things (IoT), artificial intelligence (AI), machine learning (ML), and big data analytics, these facilities create an interconnected environment that not only improves production processes and building operations but also ensures flexibility and sustainability. In this comprehensive ecosystem, automated processes and real-time data exchange drive the intelligent management of everything from manufacturing workflows to heating, ventilation, air conditioning (HVAC), lighting, and security, resulting in superior performance, enhanced comfort, and significant energy savings.

Local Deployment of Large Language Models

The advent of Large Language Models (LLMs) has opened new frontiers in artificial intelligence, with models now comprising billions of parameters and continuously evolving to meet diverse applications. While cloud-based AI services have become prevalent, deploying LLMs locally on personal or enterprise hardware offers significant advantages, particularly regarding privacy and data control. This approach enables users and organizations to process sensitive information and perform AI-driven analyses without relying on external servers, thereby keeping proprietary or confidential data securely within their own computing environments. Local deployment addresses many of the privacy concerns associated with cloud-based services, where data may transit through or be stored on third-party servers. However, selecting the appropriate hardware for running LLMs locally is crucial to achieving optimal performance and efficiency.

Raspberry Pi Powered HMI with Small Language Models

With the 8GB RAM available in the reTerminal DM, deploying small quantized language models becomes feasible within this HMI environment. After extensive research, we have determined suitable models that balance quality, size, and performance for deployment on the Raspberry Pi-powered reTerminal HMI.

LLaMA 2 7B Chat – GGUF

  • Parameters: 7 billion
  • Quantization: 4-bit in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits.
  • Model Size: 4.08GB
  • Memory Requirement: 6.58GB
  • Performance: This model offers balanced quality at a medium size.

Phi-2 – GGUF

  • Parameters: 2.7 billion
  • Quantization: 4-bit in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits.
  • Model Size: 1.79GB
  • Memory Requirement: 4.29GB
  • Performance: With medium, balanced quality, Phi-2 is faster and more responsive to prompts than LLaMA 2 7B Chat.

TinyLLaMA-1.1B – GGUF

  • Parameters: 1.1 billion
  • Deployment: Specifically designed for edge devices with restricted memory and computational capacities.
  • Quantization: 4-bit, with a model size of 637 MB.
  • Performance: TinyLLaMA-1.1B provides fast responses suitable for less time-critical tasks, such as machine translation without internet connectivity. However, its output flexibility is lower than that of the LLaMA 2 7B Chat and Phi-2 models.
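As a sanity check on the 4-bit super-block scheme described above, the effective bits per weight can be computed directly. This is a back-of-the-envelope sketch; it assumes each super-block additionally carries one 16-bit scale and one 16-bit min (a detail not stated above), and that some tensors are stored at higher precision, which is why the estimate lands slightly below the listed 4.08 GB:

```python
# Effective bits per weight for 4-bit quantization in super-blocks of
# 8 blocks x 32 weights, with 6-bit per-block scales and mins.
BLOCKS_PER_SUPERBLOCK = 8
WEIGHTS_PER_BLOCK = 32

weights = BLOCKS_PER_SUPERBLOCK * WEIGHTS_PER_BLOCK      # 256 weights per super-block
weight_bits = weights * 4                                # 4-bit quantized weights
block_meta_bits = BLOCKS_PER_SUPERBLOCK * (6 + 6)        # 6-bit scale + 6-bit min per block
superblock_meta_bits = 2 * 16                            # assumed fp16 scale and min per super-block

bits_per_weight = (weight_bits + block_meta_bits + superblock_meta_bits) / weights
print(f"{bits_per_weight:.2f} bits/weight")              # → 4.50 bits/weight

# Rough size for 7 billion parameters at this rate (decimal GB):
size_gb = 7e9 * bits_per_weight / 8 / 1e9
print(f"~{size_gb:.2f} GB")                              # → ~3.94 GB
```

The remaining gap to the listed 4.08 GB is plausibly explained by tensors such as the embeddings being kept at higher precision.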

Deploying these small quantized language models on the reTerminal DM enhances the capabilities of the Raspberry Pi-powered HMI by enabling robust, localized AI-driven functionalities. While each model has its strengths and trade-offs in terms of size, memory requirements, and response quality, the selection should be based on the specific application needs and performance criteria. For tasks requiring high quality and flexibility, LLaMA 2 7B Chat is suitable, Phi-2 is preferable for faster response times, and TinyLLaMA-1.1B stands out for ultra-compact and efficient edge deployments. In this demo, we use Phi-2.

If your system has internet access, you can interact with various LLM services using API keys. Some APIs are available for free, such as Google Gemini and Meta AI’s LLaMA 3. By creating a Gemini AI API key, you can gain access to the Gemini API and other related services, as well as Meta AI’s LLaMA 3. Since these interactions occur over the internet, you can experience significantly faster response times compared to local LLMs. Integrating these cloud-based LLM services can enhance the capabilities of your system, providing rapid and sophisticated data analysis and decision support.
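As a minimal sketch of such an API integration, the request below targets Google's v1beta `generateContent` endpoint for Gemini; the endpoint path and payload shape reflect that interface at the time of writing and should be checked against the current documentation, and `GEMINI_API_KEY` is a placeholder you must replace with a real key:

```python
import json
import urllib.request

API_URL = ("https://generativelanguage.googleapis.com/v1beta/"
           "models/gemini-pro:generateContent")

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Assemble a generateContent POST request for the Gemini REST API."""
    payload = {"contents": [{"parts": [{"text": prompt}]}]}
    return urllib.request.Request(
        f"{API_URL}?key={api_key}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("Summarize today's HVAC sensor alarms.", "GEMINI_API_KEY")
# urllib.request.urlopen(req) would send the request; it is omitted here
# because it requires a valid key and internet access.
```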

Technical Stack

Node-RED: Node-RED is an open-source, flow-based development tool designed for visual programming, particularly suited for integrating devices, APIs, and online services. Node-RED offers a user-friendly, browser-based interface that allows users to wire together various components and create complex workflows through a simple drag-and-drop mechanism. This visual approach significantly reduces the barrier to entry, enabling both developers and non-developers to automate processes and build IoT applications efficiently. Key benefits of Node-RED include its extensibility through a vast library of pre-built nodes, its ability to handle real-time data processing, and its support for a wide range of protocols and devices. This makes Node-RED a powerful and versatile tool for developing smart solutions in diverse environments such as smart buildings and factories, enhancing productivity and enabling rapid prototyping and deployment.

MariaDB: MariaDB is a robust, open-source relational database management system. Known for its high performance, reliability, and security, MariaDB is an excellent choice for handling large volumes of data and complex queries efficiently. It supports a wide array of storage engines, advanced clustering, and is fully compatible with MySQL, making it a versatile solution for various database needs. A significant advantage of MariaDB is its ability to be seamlessly deployed on Linux environments, including the Raspbian OS used by the reTerminal DM.
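A minimal sketch of the kind of table the flows write sensor data into is shown below. The column names are illustrative, and sqlite3 is used purely as an in-memory stand-in so the snippet runs self-contained; on the reTerminal DM the equivalent DDL would run against MariaDB (with, e.g., `INT AUTO_INCREMENT PRIMARY KEY` instead of `INTEGER PRIMARY KEY`):

```python
import sqlite3

# In-memory stand-in for the MariaDB instance on the reTerminal DM.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE sensor_readings (
        id        INTEGER PRIMARY KEY,
        sensor_id TEXT NOT NULL,
        metric    TEXT NOT NULL,            -- e.g. 'temperature', 'humidity'
        value     REAL NOT NULL,
        ts        TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

# A Node-RED flow would perform inserts like this for each reading.
db.execute(
    "INSERT INTO sensor_readings (sensor_id, metric, value) VALUES (?, ?, ?)",
    ("hvac-01", "temperature", 22.4),
)

row = db.execute(
    "SELECT metric, value FROM sensor_readings WHERE sensor_id = ?",
    ("hvac-01",),
).fetchone()
print(row)  # → ('temperature', 22.4)
```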

FlowFuse: FlowFuse is an innovative tool that enhances the capabilities of Node-RED by providing an intuitive set of nodes specifically designed for creating data-driven dashboards and visualizations. Its latest offering, Dashboard 2.0, simplifies the process of building interactive and visually appealing user interfaces.

Methodology

In this project, we’re setting up sensors using different protocols like Modbus TCP, Modbus RTU, and MQTT. These sensors collect data through Node-RED nodes, which streamline the process. Next, we’re deploying MariaDB on the reTerminal DM, where we create databases and tables to store the collected data securely. Then, we’re utilizing locally deployed Small Language Models (SLMs) on the reTerminal DM, or cloud support like Google Gemini or Meta LLaMA, to generate SQL queries from natural language. These queries are then executed on the relevant database tables to retrieve answers. Finally, we’re designing a user-friendly dashboard using FlowFuse and Node-RED, which acts as the interface between the user and the system, allowing for easy data visualization and control.
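The query path above can be sketched end-to-end as follows. `ask_model` stands in for whichever model answers (local Phi-2 or a cloud API) and is hard-coded here so the sketch runs; the schema-in-prompt pattern and the table layout are illustrative, and sqlite3 again substitutes for MariaDB to keep the snippet self-contained:

```python
import sqlite3

SCHEMA = "sensor_readings(sensor_id TEXT, metric TEXT, value REAL, ts TEXT)"

def build_prompt(question: str) -> str:
    """Wrap the user's question with the table schema so the model emits SQL."""
    return (
        f"Given the table {SCHEMA}, answer the question below "
        f"with a single SQL query and nothing else.\nQuestion: {question}\nSQL:"
    )

def ask_model(prompt: str) -> str:
    # Stand-in for the SLM/LLM call; a real deployment would send `prompt`
    # to locally deployed Phi-2 or a cloud API and return its completion.
    return "SELECT AVG(value) FROM sensor_readings WHERE metric = 'temperature'"

db = sqlite3.connect(":memory:")  # stand-in for the MariaDB database
db.execute("CREATE TABLE sensor_readings (sensor_id TEXT, metric TEXT, value REAL, ts TEXT)")
db.executemany(
    "INSERT INTO sensor_readings VALUES (?, ?, ?, ?)",
    [("hvac-01", "temperature", 21.0, "2024-05-01 10:00"),
     ("hvac-01", "temperature", 23.0, "2024-05-01 11:00")],
)

# Natural language in, SQL out, answer back from the database.
sql = ask_model(build_prompt("What is the average temperature?"))
answer = db.execute(sql).fetchone()[0]
print(answer)  # → 22.0
```

In a production flow, the model-generated SQL should be validated (e.g. restricted to read-only SELECT statements) before execution.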

Conclusion

Integrating Operational Technology (OT) and Information Technology (IT) services in factory operations or smart buildings offers significant advantages, particularly when leveraging cloud-based LLM APIs or local SLMs within existing systems. Traditional OT systems, such as factory automation, often operate in closed networks without internet connectivity. By incorporating local LLMs, these systems can enhance their capabilities without breaching security and privacy. Additionally, deploying quantized models on the reTerminal DM offers a cost-effective solution that extends beyond data acquisition and edge control. This setup not only manages real-time operations efficiently but also empowers decision-makers and non-technical personnel by providing easy access to advanced data insights through natural language queries. The reTerminal DM, with its ability to facilitate local deployment of LLMs, ensures that sensitive information remains within the organization’s control while enabling sophisticated data analysis and decision support, ultimately driving operational excellence and innovation.
