Rethink the Compute Fabric
AI compute needs
orders of magnitude
better energy efficiency.
The current trajectory of AI compute is unsustainable. We are rebuilding the compute substrate — from novel materials to the full AI stack — to make it possible to run 100B-parameter models at 100,000 tokens/s on a single chip.
We're hiring →
945 TWh
Projected global data centre electricity consumption by 2030
IEA Energy & AI Report, 2025 — more than Japan's entire grid today
~5×
Growth in AI-optimised server power use by 2030
Gartner, 2025 — from 93 TWh to 432 TWh in five years
~1000×
Energy reduction required for AI at civilisational scale
If AI is to be abundant, the compute fabric must be rethought entirely
80–90%
Of all AI compute is inference, not training
The efficiency problem is a deployment problem — and it lives at the silicon level
What we're building
01
Novel compute fabric
We work with unconventional materials and compute-in-memory architectures to collapse the gap between memory and compute — where the real energy is wasted. Our target: 100B-parameter inference on a single chip at 100,000 tokens per second.
02
High-density analog architecture
Dense, analog, in-memory computation eliminates the data movement bottleneck that plagues today's digital accelerators. We take inspiration from how biological systems compute — efficiently, in-place, without redundant movement of data. A toy numerical sketch after item 04 makes the idea concrete.
03
AI-first design tooling
Designing novel silicon requires rethinking the EDA stack itself. We build our own design and verification tools — purpose-built for the architectures we explore, rather than forcing new ideas through tools designed for yesterday's chips.
04
Full-stack co-design
From device physics to model compilation, every layer is co-designed. Hardware constraints inform model architecture; model requirements drive circuit decisions. We don't optimise individual layers — we optimise the system.
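To make the compute-in-memory idea in items 01 and 02 concrete, here is a deliberately toy numerical sketch (array sizes, voltages, and conductance values are all invented; nothing here describes our actual device stack). It shows the core trick: store weights as conductances, and a matrix-vector multiply falls out of Ohm's law and Kirchhoff's current law, with no weight movement at all.

```python
# Toy sketch of analog in-memory matrix-vector multiplication.
# Weights live as conductances G in a crossbar; applying input voltages V
# to the rows produces column currents I = G^T V by Ohm's law and
# Kirchhoff's current law. The multiply happens where the weights are stored.
import numpy as np

rng = np.random.default_rng(0)

G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # hypothetical 4x3 crossbar, siemens
V = rng.uniform(0.0, 0.3, size=4)         # input activations encoded as volts

I = G.T @ V                               # column currents = the MVM result
print("output currents (A):", I)
```

A digital accelerator computes the same product, but only after streaming every weight across a memory bus; that data movement is where much of the energy in the headline numbers above goes.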
About us
Works of art make rules;
rules do not make works of art.
We are a small, focused team with deep experience across AI model optimisation, hardware design, signal processing, analog circuits, and compute-in-memory architectures. Between us, we have helped build multiple successful startups and shipped deep technology across domains.
We work on problems where the right answer requires understanding the physics, the mathematics, and the system together — not just the software layer on top. We are targeting orders of magnitude improvement in AI energy efficiency, because that is what it will take.
We operate with high agency and a bias toward fundamentals. If you want to work on problems that require genuine creativity, beyond standard engineering approaches, we'd like to hear from you.
Work on problems
that actually matter.
We are a small team of long-term builders. Every person here owns something real, works closely with the rest of the team, and has a direct impact on the core technical problems. We're looking for people who are drawn to hard problems because they're hard — not despite it.
We are designing compute architectures that treat analog computation as a feature, not a limitation. The circuits you design here are not peripheral — they are the core compute fabric. You will work at the intersection of device physics, circuit design, and machine learning, helping define what the next generation of AI hardware looks like at the transistor level.
What you'll do
- Design and simulate analog circuits for in-memory compute — including precision current/voltage-domain inference structures, novel memory-compute cells, and supporting signal path elements
- Work directly with the architecture team to translate system-level requirements into circuit specifications and back again
- Characterise and mitigate the effects of process, voltage, temperature, and long-term drift (PVT-D) in novel materials and non-standard process nodes (see the sketch after this list)
- Contribute to our custom EDA tooling, including Verilog-A models, characterisation flows, and validation infrastructure
- Collaborate across the stack — your design decisions have direct consequences for compiler targets, model quantisation, and system-level energy budgets
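For candidates curious what characterising PVT-D can look like from the system side, here is an illustrative Monte Carlo sketch. All distributions and magnitudes below are invented for the example: perturb stored conductances with variation and drift, then measure how the analog matrix-vector product degrades.

```python
# Illustrative Monte Carlo: how device variation and drift (toy stand-ins
# for PVT-D effects) perturb an analog in-memory matrix-vector multiply.
# The 5% lognormal spread and 2% drift below are invented for the sketch.
import numpy as np

rng = np.random.default_rng(1)
G = rng.uniform(1e-6, 1e-4, size=(64, 64))  # nominal conductances, siemens
V = rng.uniform(0.0, 0.3, size=64)          # input voltages
I_ideal = G.T @ V

rel_errors = []
for _ in range(1000):
    variation = rng.lognormal(mean=0.0, sigma=0.05, size=G.shape)
    drift = 1.0 - 0.02 * rng.random()       # uniform downward drift, up to 2%
    I_real = (G * variation * drift).T @ V
    rel_errors.append(np.abs(I_real - I_ideal) / I_ideal)

print(f"median relative error: {np.median(rel_errors):.2%}")
```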
What we're looking for
- Strong foundation in analog circuit design — transistor-level understanding of noise, matching, bandwidth, and power trade-offs
- Hands-on experience with custom circuit design in standard CMOS processes; familiarity with non-standard or emerging process nodes is a plus
- Comfort with simulation environments (Spectre, HSPICE, or equivalent); prior experience building compact models or behavioural models in Verilog-A or similar is valued
- Experience with mixed-signal design or analog front-ends for sensing and signal processing
- Genuine curiosity about machine learning and how model architectures interact with hardware constraints — you don't need to be an ML researcher, but you should want to understand the connection
- Prior work at a deep-tech startup, research lab, or equivalent environment where ownership and ambiguity are the norm
Nice to have
- Background in neuromorphic circuits, compute-in-memory, or resistive memory (RRAM, memristors, PCM)
- Experience with layout, parasitic extraction, and silicon bring-up
Commercial EDA tools were not built for the architectures we are designing. We are building our own — and we want engineers who think designing tooling is as interesting as the hardware it serves. You will work on software that bridges the gap between novel circuit structures, model representations, and physical design constraints, using systems languages chosen for performance and correctness.
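As one small, hypothetical example of what treating tooling as a first-class problem means in practice: even parsing a netlist into a device/net graph is a design exercise in representation. The sketch below uses a simplified SPICE-like subset, and Python for brevity; as noted below, the performance-critical components are written in Zig, Rust, or C.

```python
# Minimal sketch of a netlist parser: a simplified SPICE-style subset
# (not a full SPICE dialect) parsed into a device table and a net graph.
from collections import defaultdict

NETLIST = """\
* toy inverter with load cap
M1 out in vdd vdd pmos w=1u l=0.1u
M2 out in gnd gnd nmos w=0.5u l=0.1u
C1 out gnd 10f
"""

def parse(netlist: str):
    devices = {}              # name -> (device letter, terminal nets, params)
    nets = defaultdict(list)  # net -> names of devices touching it
    for line in netlist.splitlines():
        line = line.strip()
        if not line or line.startswith("*"):
            continue                      # skip blanks and comments
        tokens = line.split()
        name = tokens[0]
        if name[0].upper() == "M":        # MOSFET: 4 terminals, then model/params
            terms, rest = tokens[1:5], tokens[5:]
        else:                             # 2-terminal device: 2 nodes, then value
            terms, rest = tokens[1:3], tokens[3:]
        devices[name] = (name[0].upper(), terms, rest)
        for net in terms:
            nets[net].append(name)
    return devices, nets

devices, nets = parse(NETLIST)
print("devices on net 'out':", nets["out"])   # -> ['M1', 'M2', 'C1']
```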
What you'll do
- Design and build EDA tooling infrastructure for analog and mixed-signal circuit design — including netlist parsers, design rule engines, characterisation automation, and verification flows
- Work in Zig, Rust, and/or C on performance-critical tool components; contribute to a codebase where correctness guarantees are a design goal, not an afterthought
- Build AI-assisted design tools: design-space exploration, automated schematic generation, constraint propagation, and layout assist
- Integrate with standard data formats (SPICE, Verilog-A, GDSII, LEF/DEF) and open-source EDA ecosystems
- Work closely with analog designers and compiler engineers to understand requirements and build tools that actually reflect how designers think
What we're looking for
- Strong systems programming skills in at least one of: C, Zig, or Rust — comfort with low-level memory management, performance profiling, and building reliable tooling
- Experience building developer tools, compilers, or structured data pipelines; you understand how to design for both correctness and usability
- Familiarity with EDA concepts — even at a high level — is strongly valued; prior work with SPICE, OpenROAD, KLayout, Magic, or similar tools is a plus
- Exposure to ML-assisted design, constraint solving, or formal methods is a plus
- A mindset that treats tooling as a first-class technical problem, not scaffolding — the tools we build have direct impact on what hardware is possible
Nice to have
- Prior contributions to open-source EDA tools or hardware description languages
- Experience with language design, IR design, or domain-specific languages (DSLs)
Getting a 100B-parameter model to run on a fundamentally different compute architecture is not just a software problem — it requires rethinking how models are represented, partitioned, and lowered to hardware. You will build the compiler stack that bridges PyTorch and TensorFlow model graphs to our custom hardware ISA, working across the full pipeline from graph-level optimisation to low-level instruction scheduling.
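To make graph-level optimisation concrete, here is a toy fusion pass over a made-up IR. This is an illustrative sketch, not our compiler and not MLIR: it merges a matmul that feeds a relu into a single fused op, so the intermediate tensor never round-trips through memory.

```python
# Toy operator-fusion pass over a made-up graph IR. Assumes ops arrive in
# topological order and each fused matmul has a single consumer; a real
# pass would check use counts and handle many more patterns.
from dataclasses import dataclass

@dataclass
class Op:
    name: str
    kind: str          # "matmul", "relu", "matmul_relu", ...
    inputs: list       # names of producer ops or graph inputs

def fuse_matmul_relu(graph):
    by_name = {op.name: op for op in graph}
    out, dead = [], set()
    for op in graph:
        producer = by_name.get(op.inputs[0]) if op.inputs else None
        if op.kind == "relu" and producer and producer.kind == "matmul":
            out.append(Op(op.name, "matmul_relu", producer.inputs))
            dead.add(producer.name)       # the standalone matmul is subsumed
        else:
            out.append(op)
    return [op for op in out if op.name not in dead]

graph = [Op("mm0", "matmul", ["x", "w"]), Op("act0", "relu", ["mm0"])]
for op in fuse_matmul_relu(graph):
    print(op)   # -> Op(name='act0', kind='matmul_relu', inputs=['x', 'w'])
```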
What you'll do
- Design and implement compiler passes for graph-level optimisation: operator fusion, tiling, quantisation-aware lowering, and memory layout transformations
- Work on the backend: map model operations to our custom hardware ISA, handling resource allocation, scheduling, and memory management for a spatial, in-memory compute architecture
- Build and maintain compiler infrastructure using MLIR and/or LLVM; contribute to dialect design and lowering pipelines for novel compute primitives
- Develop model partitioning strategies (tensor, pipeline, and data parallelism) tuned to our hardware's specific topology and memory hierarchy (a back-of-envelope sketch follows this list)
- Profile and benchmark end-to-end model execution; own the feedback loop between compiler decisions and measured hardware performance
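As a back-of-envelope illustration of why partitioning is hardware-shaped (every constant below is invented and none reflects our real tile geometry): pin a quantised 100B-parameter model into fixed-size in-memory tiles, and the tile count, and therefore the parallelism structure, falls straight out of the arithmetic.

```python
# Back-of-envelope tile count for a 100B-parameter model under invented
# hardware assumptions. The point is the shape of the reasoning, not the
# numbers: with weights pinned in place, the partitioner schedules
# activation movement between tiles rather than weight movement.
PARAMS = 100e9                 # model parameters
BITS_PER_WEIGHT = 8            # assumed post-quantisation weight width
BITS_PER_CELL = 4              # assumed multi-level memory cell precision
CELLS_PER_TILE = 1024 * 1024   # assumed 1024 x 1024 crossbar tile

cells = PARAMS * (BITS_PER_WEIGHT / BITS_PER_CELL)  # 2 cells per weight
tiles = cells / CELLS_PER_TILE
print(f"tiles needed: {tiles:,.0f}")   # ~190,735 under these assumptions
```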
What we're looking for
- Experience building production compiler infrastructure — LLVM, MLIR, TVM, XLA, or equivalent; familiarity with dialect design, IR transformations, and lowering pipelines
- Strong algorithmic problem-solving: you can move from a high-level problem statement to a correct, efficient implementation
- Solid working knowledge of ML frameworks (PyTorch, TensorFlow, ONNX) and how models are represented and executed at the graph level
- Comfort with C++ and Python; experience with systems languages (Rust, Zig) is valued
- Prior work mapping models to novel or non-GPU hardware — ASICs, FPGAs, or custom accelerators — is a strong plus
Nice to have
- Contributions to open-source compiler projects (Torch-MLIR, ONNX-MLIR, Triton, TVM)
- Understanding of analog hardware constraints and how they affect numerical precision, quantisation strategies, and operator support
Don't see your role listed? If you have deep expertise in algorithms, physics, mathematics, or hardware-software co-design — write to us at careers@abascaler.com with a short note on what you've built and what you want to work on.
Privacy Policy
Last updated: 20 February 2026
The short version
This website does not collect, store, or process any personal data. We do not use cookies, analytics, tracking pixels, or any third-party services that collect information about you.
What we collect
Nothing. This is a static, informational website. There are no forms, accounts, logins, or interactive features that capture data. We do not use cookies or local storage.
Third-party services
This site loads fonts from Google Fonts. Google may log standard server requests (such as your IP address) when serving font files. We have no access to or control over that data. No other third-party services are used.
Email
If you contact us via the email addresses listed on this site, we will receive your email address and message content. We use that information only to respond to you and do not add you to any mailing list.
Changes
If this policy changes, we will update the date above. Given how little data we handle, material changes are unlikely.
Contact
Questions about this policy can be sent to hello@abascaler.com.
Terms of Use
Last updated: 20 February 2026
What this site is
This website is operated by Abascaler. It provides general information about our company and open roles. Nothing on this site constitutes a contractual offer, investment advice, or professional recommendation of any kind.
Use of content
All text, graphics, and logos on this site are the property of Abascaler unless otherwise noted. You may view and reference the content for personal, non-commercial purposes. You may not reproduce, distribute, or create derivative works from the content without our written permission.
No warranties
This site and its content are provided "as is" without warranties of any kind. While we make a reasonable effort to keep information current, we do not guarantee accuracy or completeness.
Limitation of liability
To the fullest extent permitted by law, Abascaler is not liable for any damages arising from your use of or inability to use this website.
External links
This site may occasionally link to external resources. We are not responsible for the content or practices of any third-party sites.
Changes
We may update these terms from time to time. The date at the top of this page reflects the most recent revision.
Contact
Questions about these terms can be sent to hello@abascaler.com.