WakeTech.ai
Stack and infrastructure

What runs WakeTech.ai.

A technical inventory for Crane Worldwide Logistics IT leadership. Enough detail to evaluate the platform. A deeper due-diligence packet is available under NDA on request.

Cloud and infrastructure

WakeTech.ai runs on Microsoft Azure. Every enterprise customer receives a dedicated deployment consisting of their own compute, their own storage, and their own database. No shared infrastructure, no shared schema, no noisy neighbor. For customers with strict data residency or compliance requirements, deployment can land inside the customer's corporate Azure tenant under their own billing and governance.

The primary hosting region is US East. Multi-region deployment is available for customers with geographic redundancy requirements. The deployment automation is infrastructure-as-code, so spinning up a new enterprise customer environment is a repeatable and audited process, not a manual click-through.
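
The repeatability claim above rests on conventions like deterministic resource naming. As a hypothetical sketch (the naming rules and identifiers below are illustrative, not WakeTech.ai's actual scheme), a per-customer environment might be parameterized like this:

```typescript
// Hypothetical sketch: deterministic per-customer resource naming, the kind of
// convention an infrastructure-as-code pipeline enforces so every environment
// is reproducible. Names and rules are illustrative only.
interface CustomerEnv {
  customerId: string; // short lowercase identifier, e.g. "crane"
  region: string;     // Azure region, e.g. "eastus"
}

function resourceNames(env: CustomerEnv) {
  const base = `${env.customerId}-${env.region}`;
  return {
    resourceGroup: `rg-${base}`,
    vm: `vm-${base}-app`,
    sqlServer: `sql-${base}`,
    // Azure storage account names must be 3-24 lowercase alphanumerics.
    storageAccount: `st${env.customerId}${env.region}`
      .replace(/[^a-z0-9]/g, "")
      .slice(0, 24),
  };
}
```

Because the names are pure functions of the customer and region, the same inputs always produce the same environment, which is what makes the process auditable.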

Cloud provider
Microsoft Azure
Operating system
Ubuntu Linux LTS on Azure virtual machines
Container runtime
Docker for service isolation and portability
Reverse proxy
nginx for TLS termination, routing, and rate limiting
Process supervisor
PM2 for Node.js service lifecycle and crash recovery
Deployment model
Dedicated per-customer, optional customer-tenant landing

Application platform

The core platform is written in TypeScript on Node.js, served through Next.js for the web application layer. TypeScript enforces type safety across the codebase, which materially reduces the class of bugs that reach production. Next.js gives us server-side rendering for fast initial page loads, API route handlers for integration endpoints, and a static build pipeline that deploys as a single standalone artifact.
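
As an illustration of the API route layer, a minimal Next.js App Router handler is just a typed function over the Web-standard Request/Response API. The route path and payload here are invented for the sketch, not WakeTech.ai's actual endpoints:

```typescript
// Minimal sketch of a Next.js App Router API route handler
// (conventionally app/api/health/route.ts). Next.js route handlers are built on
// the Web-standard Request/Response types, so the handler is an ordinary function.
// The route and payload shape are illustrative.
export async function GET(_request: Request): Promise<Response> {
  return Response.json({ status: "ok", service: "core", ts: Date.now() });
}
```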

Language
TypeScript (strict mode)
Runtime
Node.js LTS
Web framework
Next.js with React
UI framework
Tailwind CSS with a locked design system
Build output
Standalone production bundle, container-ready
Package management
npm with lockfile-enforced reproducibility

Data platform

Transactional data lives in Microsoft SQL Server on Azure. Every customer has a dedicated database instance. No shared database. No logical tenant separation inside a common schema. SQL Server was chosen because it is the industry standard for freight and logistics accounting workloads, integrates cleanly with Microsoft Azure, and has a mature backup and point-in-time recovery story that enterprise procurement teams are already familiar with.

Object storage for documents, BOL images, EDI archives, and other binary artifacts uses Azure Blob Storage with per-customer containers and server-side encryption. Retention policies and lifecycle rules are set per customer according to their compliance requirements.

Transactional database
Microsoft SQL Server on Azure, dedicated per customer
Object storage
Azure Blob Storage, per-customer containers
Backup
Automated point-in-time recovery with customer-defined retention
Data portability
Customer can extract their full dataset on request

AI and intelligence

The WakeTech.ai AI crew runs on Anthropic Claude models via the Anthropic API. Claude was selected for three reasons: strong enterprise security posture (SOC 2 Type II, no training on customer data, zero data retention available), high reasoning quality for operational decision-making tasks, and a mature tool-use API that supports the agent architecture the crew is built on.

Agents are stateful services with their own memory stores, skill catalogs, and approval thresholds. Each agent (Lane for carrier negotiation, Ada for operations oversight, Pulse for load tracking, Scout for carrier development, Ledger for AR, Intake for quoting, Sentinel for infrastructure monitoring, Apollo for appointment scheduling) is scoped to a specific operational domain and earns autonomous authority through demonstrated performance against a 92 percent approval threshold.
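
The graduated-autonomy gate can be sketched as a simple check over an agent's approval history. Only the 92 percent threshold comes from the description above; the minimum sample size and field names are assumptions for the sketch:

```typescript
// Hedged sketch of graduated autonomy: an agent earns autonomous authority for
// a skill once its approval rate clears the 92 percent threshold. The minimum
// sample size and record shape are assumptions, not the actual implementation.
interface SkillRecord {
  approvals: number;  // operator-approved outcomes
  rejections: number; // operator-rejected outcomes
}

const APPROVAL_THRESHOLD = 0.92;
const MIN_SAMPLE = 25; // assumed: don't grant autonomy on a handful of data points

function mayActAutonomously(record: SkillRecord): boolean {
  const total = record.approvals + record.rejections;
  if (total < MIN_SAMPLE) return false; // insufficient history: stay supervised
  return record.approvals / total >= APPROVAL_THRESHOLD;
}
```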

Corridor intelligence (WakeTech.ai Signal) is a separate analytical layer scoring US freight corridors against NOAA weather data, EIA energy signals, USDA agricultural signals, AIS vessel tracking, and a proprietary hurricane forecast model. This runs on our own infrastructure, not through a third party, and does not ship customer data to external analytics services.
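
To make the scoring idea concrete, here is an illustrative blend of the signal families named above into a single corridor score. The weights, normalization, and 0-100 scale are invented for this sketch and are not WakeTech.ai Signal's actual model:

```typescript
// Illustrative only: a weighted blend of normalized signals into one corridor
// risk score. Weights and scale are assumptions for the sketch.
interface CorridorSignals {
  weather: number;     // each normalized 0 (benign) to 1 (severe)
  energy: number;
  agriculture: number;
  vessels: number;
  hurricane: number;
}

const WEIGHTS: Record<keyof CorridorSignals, number> = {
  weather: 0.3,
  energy: 0.15,
  agriculture: 0.15,
  vessels: 0.1,
  hurricane: 0.3,
};

function corridorRisk(s: CorridorSignals): number {
  let score = 0;
  for (const k of Object.keys(WEIGHTS) as (keyof CorridorSignals)[]) {
    score += WEIGHTS[k] * s[k];
  }
  return Math.round(score * 100); // 0 = calm, 100 = maximum disruption risk
}
```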

LLM provider
Anthropic Claude (enterprise API)
Data retention with provider
Zero data retention available for customer-sensitive workflows
Agent framework
Proprietary, built on Anthropic tool use
Approval model
Graduated autonomy with performance thresholds and operator oversight

Routing and mapping

Truck routing is powered by a self-hosted Valhalla routing engine running against OpenStreetMap data for North America. Valhalla is a respected open-source routing project used by Mapbox, government agencies, and commercial logistics operators. Self-hosting means WakeTech.ai customers pay no per-request routing fees and no per-seat licensing, and get sub-second route calculation with full truck profile support (vehicle dimensions, weight, hazmat, bridge restrictions, time-of-day access rules).
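
Valhalla's documented truck costing accepts vehicle dimensions directly in the route request. A sketch of the payload a dispatch system might POST to the self-hosted engine (the endpoint URL in the comment and the example dimensions are illustrative; units follow Valhalla's convention of meters and metric tons):

```typescript
// Sketch of a Valhalla /route request payload using the documented "truck"
// costing model. Dimensions are in meters and metric tons per Valhalla's
// convention; the example values approximate a 53-foot dry van and are
// illustrative only.
interface LatLon {
  lat: number;
  lon: number;
}

function truckRoutePayload(origin: LatLon, destination: LatLon) {
  return {
    locations: [origin, destination],
    costing: "truck",
    costing_options: {
      truck: {
        height: 4.11,  // meters (~13'6")
        width: 2.6,    // meters
        length: 21.3,  // meters, tractor plus 53' trailer
        weight: 36.28, // metric tons (~80,000 lb gross)
        hazmat: false, // when true, avoids hazmat-restricted roads
      },
    },
  };
}
// The payload would be POSTed to the self-hosted engine, e.g. (URL assumed):
// fetch("http://valhalla.internal:8002/route",
//       { method: "POST", body: JSON.stringify(payload) })
```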

Map rendering uses PMTiles with MapLibre GL for vector tile delivery. PMTiles is a single-file archive format that serves vector tiles over HTTP range requests without a tile server dependency. MapLibre is the open-source fork of Mapbox GL and does not require a Mapbox access token or per-view billing. Maps display in the browser with zero runtime cost per view.
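
Wiring PMTiles into MapLibre amounts to registering the pmtiles:// protocol handler and pointing a vector source at the archive. A sketch, with an assumed archive URL and layer names:

```typescript
// Sketch of a MapLibre GL style whose vector source is a PMTiles archive.
// The pmtiles:// scheme is resolved client-side by the pmtiles protocol
// adapter, so no tile server is needed. Archive URL and layer names are
// illustrative, not WakeTech.ai's actual basemap.
function pmtilesStyle(archiveUrl: string) {
  return {
    version: 8,
    sources: {
      basemap: {
        type: "vector",
        url: `pmtiles://${archiveUrl}`, // fetched via HTTP range requests
      },
    },
    layers: [
      { id: "roads", type: "line", source: "basemap", "source-layer": "roads" },
    ],
  };
}
// In the browser, per the pmtiles library's documented usage:
//   import maplibregl from "maplibre-gl";
//   import { Protocol } from "pmtiles";
//   const protocol = new Protocol();
//   maplibregl.addProtocol("pmtiles", protocol.tile);
//   new maplibregl.Map({ container: "map",
//                        style: pmtilesStyle("https://example.com/na.pmtiles") });
```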

Routing engine
Valhalla, self-hosted, North America truck profile
Map data source
OpenStreetMap, refreshed on a regular cadence
Tile format
PMTiles served over HTTP
Map renderer
MapLibre GL, no third-party API keys required

Integration platform

EDI is handled natively by WakeEDI, our production EDI processing service. It currently handles the core freight transaction set: 204 tenders, 990 tender responses, 214 shipment status updates, and 997 functional acknowledgments. WakeEDI supports SFTP and AS2 transport, accommodates partner-specific quirks without custom development per partner, and logs every message for audit and replay.
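
One small but representative piece of EDI processing is identifying an inbound transaction set from the X12 ST segment. A simplified sketch, assuming the common `*` element and `~` segment delimiters (a production parser like WakeEDI reads the actual delimiters from the ISA header):

```typescript
// Simplified sketch: detect the X12 transaction set of an inbound EDI message
// by reading its ST segment. Assumes "*" element and "~" segment delimiters;
// real interchanges declare their delimiters in the ISA header.
const SUPPORTED = new Set(["204", "990", "214", "997"]);

function transactionSetId(x12: string): string | null {
  for (const segment of x12.split("~")) {
    const elements = segment.trim().split("*");
    if (elements[0] === "ST") return elements[1] ?? null;
  }
  return null; // no ST segment found
}

function isSupported(x12: string): boolean {
  const id = transactionSetId(x12);
  return id !== null && SUPPORTED.has(id);
}
```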

General-purpose integration uses a combination of REST APIs exposed by core, webhook event subscriptions for async notification, and customer-specific adapters that live in the customization layer. The integration platform is how WakeTech.ai connects to customer legacy systems, customs brokers, carrier onboarding services, compliance platforms, and other components of the broader logistics stack.

EDI
WakeEDI native, SFTP and AS2 transports
EDI transaction sets
204, 990, 214, 997, with others added as needed
API surface
REST with versioned contracts
Async integration
Webhook subscriptions, signed payloads
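
Signed webhook payloads are commonly verified with an HMAC over the raw request body. The sketch below assumes HMAC-SHA256 with hex encoding, which is a widespread convention; the actual WakeTech.ai signing scheme and header name would be defined in the integration documentation:

```typescript
// Illustrative consumer-side verification of a signed webhook payload,
// assuming HMAC-SHA256 over the raw body with a hex-encoded signature.
// The actual signing scheme is an assumption for this sketch.
import { createHmac, timingSafeEqual } from "node:crypto";

function signPayload(secret: string, rawBody: string): string {
  return createHmac("sha256", secret).update(rawBody).digest("hex");
}

function verifySignature(secret: string, rawBody: string, signature: string): boolean {
  const expected = Buffer.from(signPayload(secret, rawBody), "hex");
  const received = Buffer.from(signature, "hex");
  // timingSafeEqual throws on length mismatch, so check lengths first.
  return expected.length === received.length && timingSafeEqual(expected, received);
}
```

Constant-time comparison matters here: comparing signatures with `===` leaks timing information an attacker can exploit.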

Email and messaging

WakeTech.ai operates its own self-hosted JMAP and IMAP mail infrastructure for agent mailboxes, dispatch communication, and transactional customer email. Running mail on our own infrastructure keeps sensitive freight communication out of third-party mail providers, avoids per-mailbox licensing costs, and gives each customer the option of a dedicated mail domain for their deployment.

Outbound notification supports SMS and voice through an A2P 10DLC compliant messaging pipeline for driver and dispatch alerts. Voice agents for live carrier negotiation use enterprise-grade real-time voice synthesis.

Mail server
Self-hosted JMAP and IMAP, per-customer domain option
Transactional email
Native SMTP with DKIM and DMARC alignment
SMS
A2P 10DLC compliant pipeline
Voice
Enterprise real-time voice synthesis for agent workflows

Observability and operations

Every deployment runs its own monitoring, logging, and alerting stack. Infrastructure metrics, application logs, and business event streams are collected per customer and retained according to customer policy. The WakeTech.ai Sentinel agent provides automated infrastructure health monitoring and can escalate incidents before they become customer-visible outages. Customers have direct read access to their own operational telemetry.

Monitoring
Per-deployment health, performance, and business metrics
Logging
Structured application and access logs, customer-scoped retention
Alerting
Automated through Sentinel agent with operator escalation
Backups
Automated with customer-defined retention windows

Security posture

Isolation first. Each customer deployment is a separate network perimeter. There is no cross-tenant query path: deployments are isolated at the infrastructure level, and the application layer itself is scoped to a single customer database. A security incident in one customer deployment cannot propagate to another.

Encryption everywhere. TLS 1.2 or higher for all traffic in transit. Encryption at rest for every database and every storage container. Customer-managed keys available on request for deployments with strict key control requirements.

Least privilege. Application service accounts have only the database permissions they need. Schema changes require elevated credentials held by a separate operations account. Secrets are stored in Azure Key Vault or environment-scoped secret stores, never hardcoded.

Audit trail. Every material action in the platform is logged with actor, timestamp, and affected entity. Audit logs are append-only from the application perspective and tamper-evident through infrastructure controls.
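
One common way to make an append-only log tamper-evident is hash chaining: each entry commits to the previous entry's hash, so any edit to history breaks verification downstream. A sketch of the technique (field names are assumed; the platform's actual tamper-evidence controls are infrastructure-level):

```typescript
// Sketch of a hash-chained audit log: each entry's hash covers its own fields
// plus the previous entry's hash, so altering any historical record breaks the
// chain. Field names are assumptions for the sketch.
import { createHash } from "node:crypto";

interface AuditEntry {
  actor: string;
  action: string;
  entity: string;
  timestamp: string;
  prevHash: string;
  hash: string;
}

function entryHash(actor: string, action: string, entity: string,
                   timestamp: string, prevHash: string): string {
  return createHash("sha256")
    .update([actor, action, entity, timestamp, prevHash].join("|"))
    .digest("hex");
}

function appendEntry(log: AuditEntry[], actor: string, action: string,
                     entity: string, timestamp: string): AuditEntry {
  const prevHash = log.length > 0 ? log[log.length - 1].hash : "GENESIS";
  const hash = entryHash(actor, action, entity, timestamp, prevHash);
  const entry = { actor, action, entity, timestamp, prevHash, hash };
  log.push(entry);
  return entry;
}

function chainIntact(log: AuditEntry[]): boolean {
  return log.every((e, i) => {
    const prev = i === 0 ? "GENESIS" : log[i - 1].hash;
    return e.prevHash === prev &&
      e.hash === entryHash(e.actor, e.action, e.entity, e.timestamp, prev);
  });
}
```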

Responsible disclosure. A due-diligence packet covering vulnerability response policy, penetration test history, dependency management, and supply chain controls is available under NDA on request.

What we do not use, and why

Enterprise IT teams evaluating WakeTech.ai often ask about things we explicitly do not use. A few deliberate non-choices:

No shared multi-tenant database. Every customer runs against a dedicated SQL Server instance, so there is no common schema to separate logically.

No metered mapping or routing APIs. Valhalla and PMTiles are self-hosted, which means no Mapbox access tokens, no per-view billing, and no third-party tile server dependency.

No third-party mail providers. Agent mailboxes and dispatch communication run on our own JMAP and IMAP infrastructure.

No external analytics services. Corridor intelligence runs on our own infrastructure and does not ship customer data to outside analytics platforms.

For Crane IT

This page is the public-facing summary. Procurement security reviews typically need more detail: exact version inventories, CVE response policy, penetration test reports, SOC 2 posture, dependency tree, deployment runbooks, and disaster recovery documentation. A due-diligence packet covering those topics is available under NDA.

Questions about any component, integration, or policy on this page go to aaron@waketech.ai and get answered by the person who built it.