87% of German manufacturing companies cite latency and network availability as the biggest obstacle to IoT projects on the production floor — according to a Bitkom study from 2024. Edge computing is the answer: processing data where it is generated. AWS offers two complementary approaches: AWS IoT Greengrass V2 as a lightweight edge runtime running directly on industrial hardware, and AWS Outposts as a fully managed AWS infrastructure rack inside your own factory. This article explains both architectures, provides a decision matrix and outlines the migration path from Greengrass V1 to V2 before the end-of-support deadline in June 2026.

Why Edge Computing Is Becoming Essential in Manufacturing

The connected factory generates data volumes that cloud-only architectures cannot handle alone. A modern CNC machine produces up to 2,000 measurement values per second. A high-speed quality inspection camera delivers 120 frames per second at 4K resolution. These are data streams that cannot — and often should not — be transmitted entirely to the cloud, either for economic or technical reasons.

Three drivers make edge computing a strategic necessity in production:

  1. Latency requirements: Safety PLCs require response times below 10 ms. A cloud round trip typically takes 20–100 ms — too slow for real-time control and inline visual quality inspection.
  2. Network independence: Production lines must continue operating when the WAN connection fails. Edge devices with local processing maintain operations even if the cloud link is interrupted.
  3. Data residency and compliance: Production data, machine signatures and occasionally personal data (shift schedules, operator identification) must not leave Germany under GDPR and operational compliance requirements. Local processing is therefore not only sensible in many cases — it is mandatory.

There is also a straightforward economic argument: cloud bandwidth and compute for raw data are expensive. Edge computing filters, aggregates and compresses data before transmission — only relevant events and summaries reach the cloud.
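The filter-and-aggregate pattern described above can be sketched in a few lines. This is a minimal illustration, not production code; the sensor values and window size are invented for the example, and a real edge component would read from the machine rather than simulate data:

```python
import json
import statistics

def summarize_window(samples: list) -> dict:
    """Collapse a window of raw sensor readings into one summary record."""
    return {
        "count": len(samples),
        "min": min(samples),
        "max": max(samples),
        "mean": round(statistics.fmean(samples), 3),
        "stdev": round(statistics.pstdev(samples), 3),
    }

# A CNC machine at 2,000 values/s produces 120,000 readings per minute;
# one summary record per minute replaces them all in the cloud upload.
raw = [20.0 + 0.01 * (i % 50) for i in range(120_000)]  # simulated spindle temps
summary = summarize_window(raw)

raw_bytes = len(json.dumps(raw).encode())
summary_bytes = len(json.dumps(summary).encode())
print(f"payload reduced roughly {raw_bytes / summary_bytes:,.0f}x")
```

Only the summary (plus any threshold-triggered raw excerpts) needs to cross the WAN link; the raw stream never leaves the production hall.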

Key Technology Definitions

Edge Computing
Processing data at or close to the source — on the machine, at a gateway in the production hall or in the shopfloor data center — rather than centrally in the cloud. Reduces latency, saves bandwidth and enables offline-capable applications.
AWS IoT Greengrass V2
An open-source edge runtime from AWS (Apache 2.0) that runs on Linux-based industrial hardware. It brings AWS services such as Lambda, ML inference, local messaging (MQTT) and secrets management directly onto the device. Greengrass V2 uses a component-based deployment model — each function is a standalone, versioned component managed through AWS IoT Core.
Greengrass Component
The fundamental deployment unit in Greengrass V2. A component contains code (Lambda, container, or native process), a recipe file (YAML/JSON with dependencies, lifecycle hooks and parameter schema) and optional artifacts (model files, configuration). Components are managed in the AWS IoT Greengrass catalog and can be deployed to device fleets.
AWS Outposts
A fully configured AWS infrastructure rack (42U or smaller form factors), delivered, installed and operated by AWS at the customer's data center or factory. Outposts provide the same EC2 instance types, EBS, RDS, ECS, EKS and other services as an AWS region — with native integration and a unified API. Management is handled through the AWS Console the same way as regular AWS services.
Amazon SageMaker Neo
A compilation and optimization service for ML models. Neo compiles models from TensorFlow, PyTorch, MXNet, Keras, ONNX and XGBoost for specific target hardware: NVIDIA Jetson, Intel Atom, ARM Cortex, Raspberry Pi, x86-64. The result is a hardware-optimized model that requires less memory and runs inference faster — using CPU or NPU only, without a dedicated GPU.
Outpost Rack
The standard form-factor variant of AWS Outposts: a 42U rack with the AWS Nitro System, networking hardware and pre-installed AWS services. AWS handles delivery, installation, operations and updates. For smaller deployments, AWS Outposts Server (1U/2U) offers a reduced service portfolio.

AWS IoT Greengrass V2: Architecture and How It Works

Greengrass V2 transforms any Linux device (x86-64, ARM, NVIDIA Jetson) into a fully functional AWS edge node. The core is the Nucleus: a Java-based service, typically run under systemd, that executes on the device and manages communication with AWS IoT Core.

The architecture follows a hub-and-spoke model:

  1. Greengrass Core Device: The gateway device in the factory — typically an industrial PC or dedicated IoT gateway. It runs the Nucleus, hosts components and communicates upward with AWS IoT Core via MQTT/HTTPS and downward with connected leaf devices.
  2. Leaf Devices: Sensors, cameras, PLCs that communicate via MQTT with the core device. They do not require Greengrass themselves — only MQTT connectivity.
  3. Component Deployment: Deployment documents created via AWS IoT Core or the CLI define which components in which version are deployed to which device groups. Greengrass downloads artifacts from S3, verifies checksums and starts components according to their lifecycle definitions.
  4. Local IPC: Components communicate with each other via a local message bus (AWS IoT Greengrass IPC) — without routing through the cloud. This enables deterministic latency for component-to-component communication.
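The local pub/sub pattern behind point 4 can be illustrated with a toy in-process bus. Note the hedge: the real Greengrass IPC goes through the AWS IoT Device SDK and is authenticated per component; the sketch below uses no Greengrass API at all and only shows why component-to-component messaging never needs a cloud hop:

```python
from collections import defaultdict
from typing import Callable

class LocalBus:
    """Toy in-process topic bus illustrating component-to-component messaging."""
    def __init__(self) -> None:
        self._subs = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        for handler in self._subs[topic]:
            handler(message)  # delivered synchronously, never via the cloud

bus = LocalBus()
alerts = []

# "Inference component" subscribes to raw readings and emits alerts locally.
def on_reading(msg: dict) -> None:
    if msg["vibration_mms"] > 4.5:
        bus.publish("local/alerts", {"machine": msg["machine"], "level": "high"})

bus.subscribe("local/readings", on_reading)
bus.subscribe("local/alerts", alerts.append)

bus.publish("local/readings", {"machine": "cnc-07", "vibration_mms": 5.2})
print(alerts)  # [{'machine': 'cnc-07', 'level': 'high'}]
```

Because delivery is local, latency is bounded by process scheduling rather than by WAN round-trip time, which is what makes sub-millisecond component-to-component communication possible.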

Three Greengrass capabilities are particularly relevant for manufacturing: Stream Manager for buffering and ordered transmission of time-series data even with unstable connectivity, Secrets Manager integration for secure credential management without hardcoding, and Docker Container support for running containerized applications at the edge.
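Stream Manager's core behavior, buffering locally and flushing in order once connectivity returns, can be sketched with a simple bounded queue. This is a conceptual stand-in, not the Stream Manager API; capacity and record shapes are invented:

```python
from collections import deque

class OrderedBuffer:
    """Sketch of the Stream Manager pattern: buffer locally, flush in order."""
    def __init__(self, capacity: int = 10_000) -> None:
        self._queue = deque(maxlen=capacity)  # oldest records dropped when full
        self.uploaded = []

    def append(self, record: dict) -> None:
        self._queue.append(record)

    def flush(self, online: bool) -> int:
        """Drain the queue to the 'cloud' only when connectivity is back."""
        if not online:
            return 0
        sent = 0
        while self._queue:
            self.uploaded.append(self._queue.popleft())
            sent += 1
        return sent

buf = OrderedBuffer()
for t in range(5):                      # WAN outage: records accumulate locally
    buf.append({"t": t, "temp_c": 21.0 + t})
assert buf.flush(online=False) == 0     # still offline, nothing leaves the site
print(buf.flush(online=True))           # link restored: 5 records, in order
print([r["t"] for r in buf.uploaded])   # [0, 1, 2, 3, 4]
```

The real Stream Manager adds persistence to disk, per-stream retention policies and export targets such as Kinesis and S3, but the ordering guarantee is the same idea.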

AWS Outposts in the Factory: Full Cloud Infrastructure On-Premises

AWS Outposts addresses a different use case than Greengrass: workloads that need real server resources, databases, container orchestration or specific AWS services, but cannot run in the region for compliance or latency reasons.

Typical Outposts workloads in manufacturing:

  • MES (Manufacturing Execution System) on EC2 with RDS — full SQL database on-premises, with AWS-native backup and monitoring integration
  • ECS/EKS clusters for manufacturing applications with short latency requirements to the shopfloor
  • Image processing workloads with GPU instances (G4dn on Outposts) that are too compute-intensive for edge devices but too latency-sensitive for the cloud region
  • SCADA systems and historian databases with regulatory data residency requirements

AWS is responsible for physical operations: hardware maintenance, firmware updates and capacity monitoring. The customer is responsible for network connectivity (dedicated link to the AWS home region), physical security and cooling. The service link — the connection between the Outpost and the AWS home region — is required for management-plane operations, but the data plane continues to function even if the service link fails.

Decision Matrix: Greengrass V2 vs. AWS Outposts

Choosing between Greengrass and Outposts is not an either-or decision — many mature manufacturing architectures use both. The following matrix provides initial guidance:

| Criterion | AWS IoT Greengrass V2 | AWS Outposts |
|---|---|---|
| Hardware | Existing industrial hardware, IPCs, gateways (Linux, ARM, x86) | AWS-owned rack (42U) or server (1U/2U), delivered by AWS |
| Investment | Low: software on existing hardware; per-device cloud service fee | High: rack rental from ~$250k/year plus EC2 instance costs |
| Latency | <1 ms locally (IPC); <10 ms for ML inference on device | 1–5 ms in the factory (Ethernet); full EC2 performance |
| Available AWS services | Lambda, ML inference, IoT Core Shadow, Stream Manager, Secrets Manager | EC2, EBS, RDS, ECS, EKS, S3 on Outposts, ElastiCache, EMR |
| Typical use cases | Sensor data aggregation, ML inference, protocol translation, offline operation | MES, SCADA, databases, container workloads, GPU inference |
| Scalability | Thousands of devices via fleet provisioning and deployment groups | Vertical (instance type); multiple racks per site possible |
| Offline capability | Full: components continue running without cloud connectivity | Partial: data plane continues; management plane requires the service link |
| Operating model | Customer operates hardware and deploys Greengrass software updates | AWS operates hardware; customer responsible for network and physical security |
| Data residency | Fully local possible (local processing, no cloud upload required) | Fully local: physically in your own facility |
| Recommendation | IoT gateways, machine connectivity, distributed sensor fleets | Database-intensive, compliance-critical, full cloud API compatibility required |

ML Inference at the Edge: SageMaker Neo and Greengrass

The most powerful application scenario for Greengrass in manufacturing is running ML models directly on the machine — without a cloud round-trip. Typical use cases include:

  • Visual quality control: Classification of production defects (scratches, cracks, misaligned parts) on an edge device with camera connectivity — decision in under 50 ms, before the product leaves the inspection station
  • Anomaly detection on time-series data: Vibration and temperature patterns from a CNC machine are analyzed locally by an LSTM or autoencoder model; alerts are triggered locally
  • Predictive quality: Inline prediction of final product quality based on process parameters — tool wear, spindle load, feed rate — while machining is still in progress
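The anomaly-detection use case above can be illustrated with a deliberately simple statistical stand-in: a rolling z-score detector rather than the LSTM/autoencoder models mentioned, purely to show local alerting without a cloud round-trip. Window size, threshold and the vibration values are invented:

```python
import statistics
from collections import deque

class ZScoreDetector:
    """Rolling z-score over a fixed window; flags outliers locally."""
    def __init__(self, window: int = 50, threshold: float = 3.0) -> None:
        self._window = deque(maxlen=window)
        self._threshold = threshold

    def observe(self, value: float) -> bool:
        """Return True if value is anomalous relative to recent history."""
        anomalous = False
        if len(self._window) >= 10:  # wait for a minimal history
            mean = statistics.fmean(self._window)
            stdev = statistics.pstdev(self._window) or 1e-9
            anomalous = abs(value - mean) / stdev > self._threshold
        self._window.append(value)
        return anomalous

det = ZScoreDetector()
vibration = [2.0 + 0.05 * (i % 3) for i in range(60)] + [9.0]  # spike at end
flags = [det.observe(v) for v in vibration]
print(flags[-1])   # True: the spike is flagged on the device itself
print(sum(flags))  # 1
```

A trained model would replace the z-score logic inside `observe()`; the surrounding pattern, local history, local decision, local alert, stays the same.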

The workflow with SageMaker Neo and Greengrass follows four steps:

  1. Train the model: In SageMaker Studio or externally (Jupyter, local training). Export formats: SavedModel (TF), TorchScript (PyTorch), ONNX or XGBoost pickle.
  2. Compile the model: In SageMaker Neo, specify the target device (e.g. "jetson_nano", "ml_c5" for x86 CPU). Neo optimizes operator fusion, quantization and kernel selection. Output: a compressed model archive (.tar.gz) in S3.
  3. Package as a Greengrass component: The AWS-managed component aws.greengrass.SageMakerEdgeManager handles model download, versioning and lifecycle. Alternatively: a custom component using DLR (Deep Learning Runtime) for direct model loading.
  4. Deploy: A deployment document is applied to the target device group via AWS IoT Core. Greengrass downloads artifacts, starts the inference component and exposes a local gRPC endpoint for other components.
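Step 4 can be sketched as a deployment document. The target ARN, component names and versions below are placeholders; in practice the dict would be passed to `boto3.client("greengrassv2").create_deployment(**deployment)`, which this sketch only prints rather than calls:

```python
import json

# Hypothetical Greengrass V2 deployment document for a thing-group target.
deployment = {
    "targetArn": "arn:aws:iot:eu-central-1:123456789012:thinggroup/press-line-1",
    "deploymentName": "inference-rollout-v1",
    "components": {
        "aws.greengrass.Cli": {"componentVersion": "2.12.0"},
        "com.example.DefectClassifier": {"componentVersion": "1.0.0"},
    },
    "deploymentPolicies": {
        # Roll the device back automatically if a component fails to start
        "failureHandlingPolicy": "ROLLBACK",
        "componentUpdatePolicy": {
            "action": "NOTIFY_COMPONENTS",
            "timeoutInSeconds": 60,
        },
    },
}

print(json.dumps(deployment, indent=2))
```

Because the target is a thing group, every core device added to the group later receives the same deployment automatically, which is the basis for fleet-wide rollouts.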

Amazon Kinesis Video Streams (KVS) complements this scenario: edge cameras stream via the KVS Edge Agent to the Greengrass device locally. ML inference runs there. Only annotated clips (defects found, anomalies) are sent to the cloud — bandwidth savings of up to 95% compared to full video upload.

Greengrass V1 → V2 Migration: June 2026 Deadline

AWS has announced that AWS IoT Greengrass V1 will reach end-of-support on June 30, 2026. After that date, no more security patches, bug fixes or new features will be released for V1. Companies running Greengrass V1 in production must have migrated by then.

Greengrass V2 is not an incremental update — it is an architectural redesign. The key differences:

| Feature | Greengrass V1 | Greengrass V2 |
|---|---|---|
| Deployment unit | Group (monolithic) | Component (granular, versioned) |
| Code model | AWS Lambda functions | Lambda, container or native process, selectable per component |
| Provisioning | Manual or custom | Fleet provisioning, just-in-time provisioning, token exchange |
| Nucleus | Proprietary | Open source (Apache 2.0) |
| Local communication | Built-in MQTT broker | IPC (faster, authenticated); MQTT bridge optional |
| ML inference | ML Inference Component (deprecated) | SageMaker Edge Manager, DLR, native Neo integration |
| Docker support | Limited | Full (Docker Application Component) |

Recommended migration process for manufacturing operations:

  1. Take inventory (weeks 1–2): Identify all Greengrass V1 core devices, catalog Lambda functions, document dependencies and connectors.
  2. Select a pilot device (weeks 3–4): Choose a non-critical device in a lab environment as the first migration target. V1 and V2 cannot coexist on the same device — full replacement is required.
  3. Develop components (weeks 5–10): Initially wrap V1 Lambda functions as legacy Lambda components in V2 (fast, but no new V2 features). In parallel: rebuild critical workloads as native V2 components.
  4. Roll out in waves (from week 11): Use deployment groups in AWS IoT Core to migrate devices in batches. Monitor rollouts via CloudWatch and validate device configurations with AWS IoT Device Advisor. Prepare a rollback mechanism for each deployment document.
  5. Validate and go live: Run V1 (on other devices) and V2 in parallel for at least four weeks before decommissioning V1 instances.
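The wave-based rollout in step 4 amounts to partitioning the fleet into ordered batches. A trivial sketch, with hypothetical device names, makes the batching explicit:

```python
def rollout_waves(devices: list, batch_size: int) -> list:
    """Split a device fleet into ordered migration batches (waves)."""
    return [devices[i:i + batch_size] for i in range(0, len(devices), batch_size)]

# Ten hypothetical Greengrass core devices, migrated four at a time.
fleet = [f"gg-core-{n:03d}" for n in range(1, 11)]
waves = rollout_waves(fleet, batch_size=4)
print(len(waves))   # 3 waves: 4 + 4 + 2 devices
print(waves[-1])    # ['gg-core-009', 'gg-core-010']
```

In practice each wave would map to a thing group that receives its own deployment document, so a failed wave can be rolled back without touching devices already migrated.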

Storm Reply Perspective: Edge Strategies for German Manufacturers

Storm Reply helps German manufacturing companies implement AWS edge architectures — from strategy through to pilot plant. Our experience from projects over the past three years shows three recurring patterns:

Pattern 1 — Greengrass as an integration gateway: Older machines (pre-2015) speaking industrial protocols such as PROFINET, Modbus or proprietary vendor interfaces are connected via a Greengrass core device with an OPC UA bridge component. The industrial data lake in S3 receives normalized time-series data regardless of the manufacturer's protocol. ROI is typically achieved within 12 months through consolidated condition monitoring.

Pattern 2 — Edge AI for visual quality control: Inline cameras with Greengrass and Neo-compiled models replace manual visual inspection. Implementation time: 8–12 weeks for a pilot, 4–6 months for full rollout. A reduction in defect rates of 30–60% compared to manual inspection is achievable.

Pattern 3 — Outposts for MES modernization: Mid-sized manufacturers with existing MES systems on aging on-premises hardware migrate to Outposts. They retain data residency and low latency for shopfloor connectivity while gaining AWS-native services (CloudWatch, Secrets Manager, IAM) and a clear cloud migration exit strategy.

Storm Reply is an AWS Premier Consulting Partner specializing in Industrial IoT and Manufacturing. Our migration assessment for Greengrass V1 → V2 takes two weeks and delivers a complete migration plan with risk assessment and effort estimates.

Regulatory Framework: NIS2, Cyber Resilience Act and GDPR

Edge computing in manufacturing operates at the intersection of three regulatory requirements that have sharpened in 2024 and 2025:

NIS2 (Network and Information Security Directive 2): The NIS2 directive, with a transposition deadline of October 2024 (implemented in Germany via the NIS-2-Umsetzungsgesetz), requires operators of critical infrastructure and essential and important entities (size thresholds depend on sector, starting at 50 employees or €10m annual revenue) to include OT networks in their information security management. Greengrass devices in the production network are OT assets and must be incorporated into asset inventory, patch management and incident response processes. AWS IoT Device Defender provides security monitoring for Greengrass devices and integrates with SIEM systems.

Cyber Resilience Act (CRA): The CRA (in force since 2024, transition period until 2027) sets requirements for manufacturers of connected products — security by design, vulnerability disclosure and patch obligations throughout the product lifecycle. For manufacturing companies developing and deploying their own Greengrass components, this creates a duty of care: SBOM (Software Bill of Materials), dependency tracking and update processes must be documented. AWS provides Inspector and CodeArtifact for SBOM generation.

GDPR and Data Residency: Production data is generally not personal data — but exceptions exist: operator identification at machines, shift schedules, access credentials. This data must not leave Germany. Greengrass local processing and Outposts both guarantee physical data residency. Hosting cloud components in AWS region eu-central-1 (Frankfurt) additionally keeps data within the EU, avoiding third-country transfer issues under GDPR.

Benefits and Challenges at a Glance

| Aspect | Benefit / Opportunity | Challenge / Risk |
|---|---|---|
| Latency | Sub-10-ms inference directly on the machine; no WAN dependency for real-time processes | Hardware sizing requires careful model benchmarking before purchase |
| Operational continuity | Offline operation during WAN outages; Greengrass buffers data locally | Local hardware requires maintenance and physical access in emergencies |
| Scalability | Fleet provisioning enables zero-touch deployment to thousands of devices | Heterogeneous device fleets (different CPU architectures) increase complexity |
| ML at the edge | SageMaker Neo optimizes models for specific hardware; no cloud inference costs | Model drift requires regular retraining and deployment processes |
| Security | IoT Device Defender, X.509 certificates, least-privilege IAM policies | OT/IT network segmentation in legacy facilities is often complex |
| Cost | Bandwidth savings through edge filtering; no Greengrass software licensing costs | Outposts investment is significant; TCO calculation required |

Frequently Asked Questions

What is the difference between AWS IoT Greengrass and AWS Outposts?
AWS IoT Greengrass V2 is a lightweight edge runtime that runs on existing industrial hardware and brings AWS services such as Lambda, ML inference and local messaging to the machine. AWS Outposts is a fully managed AWS infrastructure rack installed in your own data center or factory — offering the same EC2 instance types, RDS, ECS and EKS as in the cloud. Greengrass is suited for low-latency control tasks at the machine; Outposts for complex workloads requiring real server resources and full AWS API compatibility.
What changes when migrating from Greengrass V1 to V2?
Greengrass V1 reaches end-of-support on June 30, 2026. Greengrass V2 introduces a component-based deployment model, improved fleet provisioning, a new open-source Nucleus core and native AWS IoT Jobs integration. V1 Lambda functions can initially run as legacy Lambda components in V2, but full reimplementation as native V2 components is recommended for new projects to take advantage of lifecycle hooks and dependency management.
How does ML inference with SageMaker Neo work at the edge?
Amazon SageMaker Neo compiles trained ML models for the target hardware — NVIDIA Jetson, Raspberry Pi, ARM Cortex, x86 — optimizing memory footprint and inference latency. The compiled model is packaged as a Greengrass component and deployed to any number of edge devices. Inference latency under 10 ms for image classification and anomaly detection is achievable even on low-power industrial hardware without a dedicated GPU.
Which regulations apply to edge data in manufacturing?
NIS2 (transposition deadline October 2024) requires operators of critical infrastructure and important entities to include OT networks in their security management framework. The Cyber Resilience Act sets requirements for connected products including Greengrass devices. GDPR restricts cross-border transfers of personal data: personal data processed locally at the machine, such as operator identification, should not leave Germany or the EU. Both AWS Outposts and Greengrass local processing enable full data processing without cloud transfer.

Sources and Further Reading

  • Bitkom: IoT in Industry — Drivers and Barriers 2024, Berlin 2024
  • AWS: AWS IoT Greengrass V2 Developer Guide, docs.aws.amazon.com/greengrass/v2/developerguide/
  • AWS: AWS Outposts User Guide, docs.aws.amazon.com/outposts/latest/userguide/
  • AWS: Amazon SageMaker Neo — Compile and Deploy Models, docs.aws.amazon.com/sagemaker/latest/dg/neo.html
  • Federal Office for Information Security (BSI): NIS-2-Umsetzungsgesetz, bsi.bund.de
  • European Commission: Cyber Resilience Act (EU) 2024/2847

Ready to develop your edge strategy for manufacturing?

Storm Reply guides you from Greengrass V1 migration to a full edge AI architecture. Speak with our Industrial IoT and AWS experts today.
