Dyber, Inc. — Hardware Reasoning Verified

AI that shows
its work.

NXPU is a processor that combines three types of reasoning on a single chip. It tells you what it knows, what it inferred, and what it doesn't know. Load domain knowledge. Ask questions. Get auditable answers with calibrated confidence. No training required.

SYS.ARCH // NXPU-R1
ENGINES // 3
PARADIGM // NEUROSYMBOLIC
STATUS // PHASE 1 RTL VERIFIED
MODULES // 26 VERILOG
TESTS // 50+ PASS
REASONING // 3/3 PROVEN
Spiking Neural Networks · Hardware Datalog · Bayesian Causal Inference · STDP Online Learning · Zero Training Data · Photonic Interconnect · Post-Quantum Secure · Explainable Reasoning
001

Three engines.
One orchestrator.
Zero compromise.

Intelligence is not one operation repeated at scale. It is the dynamic orchestration of qualitatively different computation types. The NXPU is the first processor built around that premise.

0.786
Causal Discovery F1
100%
One-Shot Accuracy
87%
Compositional Reasoning
4/4
Domain Generalization
26
Verilog Modules
450K
Gates (Synthesized)
3/3
Reasoning Tests Pass
50+
Hardware Tests Pass
002

Three paradigms.
Unified in silicon.

Each engine is purpose-built for a different kind of intelligence. The Orchestrator dynamically routes computation between them based on what each sub-problem actually needs.
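The routing idea can be sketched in a few lines of Python. This is an illustrative heuristic only, not the Orchestrator's actual dispatch logic; the sub-problem "kind" keys are invented for the demo, while the engine names match the page.

```python
# Illustrative sketch: classify each sub-problem and dispatch it to the
# engine suited to that computation type. Routing keys are hypothetical.

def route(subproblem: dict) -> str:
    """Return the engine for one sub-problem (toy heuristic)."""
    kind = subproblem["kind"]
    if kind == "perception":        # raw sensory events -> spiking cores
        return "Neural Mesh"
    if kind == "deduction":         # rule and graph queries -> logic unit
        return "Symbolic Logic Unit"
    if kind == "counterfactual":    # what-if questions -> causal cores
        return "Causal Simulation"
    return "Orchestrator"           # decompose further before dispatch

plan = [route(p) for p in [
    {"kind": "perception"},
    {"kind": "deduction"},
    {"kind": "counterfactual"},
]]
print(plan)
```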

Neural Mesh
Engine A — Perception
Neuromorphic spiking cores with event-driven, asynchronous computation. Learns from single exposures via STDP. No training runs. No datasets. Milliwatt power consumption.
  • Persistent reward-modulated STDP
  • 1024 neurons, 64 input channels
  • 100% one-shot from random connectivity
  • Zero catastrophic forgetting (EWC)
  • ~50% sparsity (event-driven)
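The core of reward-modulated STDP can be sketched as a timing-dependent weight update. This is a textbook-style software illustration, not the Neural Mesh circuit; the time constant and learning rate are made-up values, and the reward term gates the update as the bullet list above describes.

```python
# Illustrative reward-modulated STDP: a presynaptic spike shortly before a
# postsynaptic spike potentiates the synapse; the reverse order depresses it.
# TAU and LR are assumed demo constants, not NXPU parameters.

import math

TAU = 20.0   # ms, trace decay constant (assumed)
LR  = 0.1    # learning rate (assumed)

def stdp_dw(t_pre: float, t_post: float, reward: float) -> float:
    """Weight change for one pre/post spike pair, gated by reward."""
    dt = t_post - t_pre
    if dt > 0:   # pre before post: potentiate
        return  LR * reward * math.exp(-dt / TAU)
    else:        # post before (or with) pre: depress
        return -LR * reward * math.exp(dt / TAU)

w = 0.5
w += stdp_dw(t_pre=10.0, t_post=15.0, reward=1.0)   # causal pair: w rises
print(round(w, 3))
```

Because the update depends only on spike pairs as they occur, a rule like this can learn online from a single exposure, with no separate training run.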
Symbolic Logic Unit
Engine B — Reasoning
Hardware-accelerated graph traversal, unification, and theorem proving. Content-addressable memory for O(1) knowledge lookup. Reasoning chains are inherently explainable.
  • Content-addressable memory (CAM)
  • Hardware Datalog with negation-as-failure
  • 11,904 triples, 0.31ms queries
  • 7x faster than Z3 SMT solver
  • Incremental real-time updates
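The SLU's two ingredients, constant-time fact lookup and rule-based derivation, can be mimicked in software. The sketch below uses a hash index as the CAM analogue and naive fixpoint iteration for one transitive Datalog rule; entity and predicate names are illustrative, and the hardware does none of this in Python.

```python
# Sketch of the SLU idea: a triple store with an O(1)-average hash index
# (the CAM analogue) plus Datalog-style fixpoint evaluation of a
# transitive rule. All names here are hypothetical.

from collections import defaultdict

facts = {("alice", "manages", "bob"), ("bob", "manages", "carol")}

# CAM analogue: index from (subject, predicate) straight to objects.
index = defaultdict(set)
for s, p, o in facts:
    index[(s, p)].add(o)

# Rule: reports_to(X, Y) :- manages(Y, X).
#       reports_to(X, Z) :- reports_to(X, Y), reports_to(Y, Z).
reports = {(o, s) for s, p, o in facts if p == "manages"}
changed = True
while changed:                      # iterate to fixpoint
    new = {(x, z) for x, y in reports for y2, z in reports if y == y2}
    changed = not new <= reports
    reports |= new

print(("carol", "alice") in reports)
```

Every derived fact traces back to the facts and rule applications that produced it, which is what makes the reasoning chain auditable.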
Causal Simulation
Engine C — Understanding
Probabilistic cores for Bayesian inference and counterfactual reasoning. Quantum random number generation for true Monte Carlo sampling. Learns by experimenting, not observing.
  • Hardware do-calculus (Pearl)
  • F1=0.786 on Sachs benchmark
  • Beats PC, GES, NOTEARS, DAG-GNN
  • Active intervention selection
  • 3/3 standard benchmark wins
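The difference between observing and intervening, the heart of Pearl's do-calculus, fits in a toy Monte Carlo sketch. The graph (Z confounds X and Y) and all probabilities below are invented for illustration; the point is only that conditioning on X=1 and setting do(X=1) give different answers.

```python
# Toy confounded graph: Z -> X, Z -> Y, X -> Y. Observing X=1 inflates
# P(Y=1) through the confounder Z; do(X=1) severs the Z -> X edge.
# All probabilities are made up for the demo.

import random

random.seed(0)

def sample(do_x=None):
    z = random.random() < 0.5
    x = do_x if do_x is not None else (random.random() < (0.9 if z else 0.1))
    y = random.random() < (0.8 if (x and z) else (0.2 if x or z else 0.05))
    return x, y, z

N = 100_000
obs = [sample() for _ in range(N)]
p_obs = sum(y for x, y, _ in obs if x) / max(1, sum(x for x, _, _ in obs))
p_do  = sum(y for _, y, _ in (sample(do_x=True) for _ in range(N))) / N
print(p_obs > p_do)   # observation overstates X's effect on Y
```

A system that can only condition on data sees `p_obs`; one that can intervene recovers `p_do`, the actual causal effect.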
002.5

Same threat.
Two systems.
One shows its work.

Select a network scenario. See how a typical LLM responds vs. how NXPU reasons through it step by step. Pay attention to what happens when the system encounters something it's never seen before.

003

The GPU era
is a local maximum.

Scaling transformers has hit diminishing returns on reasoning. The next leap requires architectural innovation, not bigger clusters.

Current Paradigm
  • Trillions of tokens: requires massive pre-collected datasets
  • $100M training runs: thousands of GPU-hours per model
  • Frozen after training: knowledge becomes stale immediately
  • Correlation, not causation: pattern matching without understanding
  • Black box: no explainability, no audit trail
  • 700W per chip: unsustainable energy trajectory
NXPU Paradigm
  • Low-data reasoning: NM from single exposures, SLU from loaded knowledge, CSE from interventions
  • Milliwatt inference: event-driven compute, only on change
  • Continuous adaptation: STDP learns in real time during deployment
  • Causal understanding: do-calculus discovers mechanisms, not patterns
  • Transparent reasoning: every decision has a provable trace
  • Post-quantum secure: PQC integrated at the silicon level
004

From simulation
to silicon.

Four phases. Each gated by measurable proof points. We do not commit capital until the data justifies it.

Phase 0 + Phase 1 RTL — Complete
Software Validation + Hardware Design
Phase 0: NXSim simulator, NXLang compiler, NXRI reasoning engine. 15+ benchmarks (Sachs F1=0.786, ARC-AGI 8.5%, 87% compositional reasoning). Phase 1: 26 Verilog modules, 4,589 lines RTL, 450K gates synthesized. Hardware cross-validated against Python. 3/3 reasoning proofs pass: concept formation at 1030-fact scale, feedback convergence, analogy transfer.
Phase 1 FPGA — Next
Silicon Validation
RTL verified in simulation. Next: deploy to a Xilinx FPGA for the first real-hardware measurement of latency and energy. Target: validate that the 26-module architecture runs on silicon. One measurement on real hardware > 1000 simulation cycles.
Phase 2 — 2027
Architecture Refinement
Bottleneck analysis. NXLang v1.0 compiler. Photonic interconnect simulation. Partner engagement with fab houses. Five demo applications across defense, medical, and robotics.
Phase 3 — 2028
First Silicon
ASIC tape-out. True analog crossbar arrays. On-die QRNG. Silicon photonics integration for Unified Memory Fabric. Post-silicon validation against NXBench.
nxpu@dyber.org