Data Engine

Introduction

The Data Engine is the solution responsible for ingesting, processing, storing, and visualizing data. It can be regarded as the Industrial Metaverse solution in the ecosystem: it builds a digital representation of a customer's physical assets (a Digital Twin). The Data Engine is responsible for the following:

  • ... @TODO

Way of Working

As the Data Engine development team, we follow this way of working:

  1. Infrastructure: We deploy the infrastructure independently of the applications. It is provisioned with Pulumi, which creates the entire cloud infrastructure required. A CI/CD pipeline updates the infrastructure whenever changes are made.
  2. Applications: For each application, an Azure App Service and a container registry entry are provisioned. The application is deployed to the App Service through a webhook, which is triggered whenever a new version of the application image is pushed to the container registry. A CI/CD pipeline builds the application image and pushes it to the container registry.
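
As an illustration, the build-and-push step in point 2 could be sketched as the following GitHub Actions workflow. This is a hedged sketch only, not the actual pipeline: the workflow name, the registry name `dataengineacr`, and the image name are hypothetical placeholders.

```yaml
# Hypothetical workflow sketching the build-and-push step.
# Registry and image names below are placeholders, not the real ones.
name: build-and-push-backoffice-backend
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Authenticate against the Azure Container Registry (name is a placeholder)
      - run: az acr login --name dataengineacr
      # Build and push the image; the registry webhook then triggers
      # the App Service to pull the new version.
      - run: |
          docker build -t dataengineacr.azurecr.io/backoffice-backend:${{ github.sha }} apps/backoffice-backend
          docker push dataengineacr.azurecr.io/backoffice-backend:${{ github.sha }}
```

The key point is the last step: the pipeline only pushes to the registry; deployment itself happens through the registry webhook described above.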

Structure

The Data Engine code lives in the solutions/data-engine folder and is organized as a monorepo using Turborepo. It contains the files and folders shown below.

Each of these folders is documented in the sidebar under its respective name.

.
├── README.md
├── apps # All the applications related to the solution
│ ├── backoffice-backend # - [NestJS] App - Backoffice: Backend
│ ├── backoffice-frontend # - [NextJS] App - Backoffice: Frontend
│ ├── signaling-http-socket-io # - [NodeJS] Signaling service for realtime websocket communication
│ ├── twin-event-processor # - [NodeJS] Sink msgs to the Event Store, Signaling and Actions
│ └── twin-mapper # - [NodeJS] Mapper to map raw device messages on the Digital Twin
├── infrastructure # Infrastructure related code for Pulumi
│ ├── iot-platform # - The core platform
│ ├── iot-platform-location-analytics # - The location analytics platform on top of the Core platform
│ └── protocol-converters # - The protocol converters
├── demos
│ └── drone-yolo7 # Demonstration for Drone Analytics
├── package.json
├── packages # Common packages
│ ├── database # - [Prisma] Database abstraction layer
│ ├── eslint-config-custom # - Linting configuration
│ └── tsconfig # - Typescript configuration
├── pnpm-workspace.yaml # Defines the directories to take into account for turbo
├── scripts # Helper scripts
│ ├── start-docker.sh # - Starting Docker
│ └── start-postgres.sh # - Starting PostgreSQL
└── turbo.json # Pipelines defining how to start each project
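
For reference, a minimal turbo.json of the kind described above might look as follows. This is a sketch only, assuming the Turborepo 1.x schema (with a top-level `pipeline` key); the task names `build` and `dev` and their outputs are assumptions, not this repository's actual configuration.

```json
{
  "$schema": "https://turbo.build/schema.json",
  "pipeline": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**", ".next/**"]
    },
    "dev": {
      "cache": false,
      "persistent": true
    }
  }
}
```

Here `"dependsOn": ["^build"]` tells Turborepo to build a package's workspace dependencies first, while `dev` is marked persistent and uncached because it runs long-lived watch processes.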

Getting Started

Everything is executed from the solutions/data-engine folder.

To get started, run the commands below:

# Navigate to the monorepo
cd solutions/data-engine

# Install the dependencies
pnpm i

All the dependencies are now installed and the project is ready to go.

Running

Backoffice

The Backoffice is responsible for customer interactions. It provides a user interface for interacting with the Data Engine solution.

# Seed the Database
pnpm run db:init

# Run the Backoffice
pnpm run dev:backoffice

The above spins up the Backoffice and makes the following endpoints available:

Microservice

Test

Deploying