FirstlightAssess

Technical due diligence on any codebase.

Runs against the repo — the code never leaves. Eight evidenced artefacts, from architecture to AI claims to security, the way an acquirer's diligence team does it.

The Firstlight family

Three modules to meet you where you are

Assess · Operate · Exit — one brand, one rubric, one tier table, across the deal lifecycle.

Assess

Buyside technology & AI due diligence — eight evidenced artefacts on any codebase, the way an acquirer's diligence team does it.

Available now

Operate

Portfolio-company health, tracked continuously — the same evidence trail as Assess, watching instead of snapshotting.

Coming soon

Exit

Exit-readiness — run the acquirer's diligence on yourself before the room, gaps surfaced and costed and sequenced.

Coming soon

Deliverables

Eight artefacts, every assessment

Each run produces eight deliverables — evidenced to the line, costed in USD, and exportable. The same artefact set a white-glove diligence team would hand you, automated.

Technical findings — cited to the line

Every finding anchored to a file:line in your repo, scored on the same red-flag taxonomy a white-glove diligence team uses — deal-killers first, then material, then notes. A dashboard of counts by severity and dimension up top.

file:line evidence on every finding — no hand-waving

Executive summary

Five points, written last, that read standalone — the verdict up front, with the AI-claim headline (real model or wrapper) usually leading.

One page a partner can act on

Remediation workbook

Sequenced, costed in USD, owner-by-role — Conditions to close / Phase 1 / Phase 2 / Phase 3 — and it flags what's already in the team's hiring plan.

Costed and phased, not a wish list

AI-agent fix scripts

Machine-runnable fixes for the deterministic items — rotate a leaked .env out of git history, pin a CVE'd dependency, add a dependency-scanning CI gate.

Hand the fixes straight to a coding agent
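To make the idea concrete, here is a minimal sketch of what one deterministic fix might look like: pinning an unpinned, CVE-affected dependency in a requirements.txt. The package name, version, and function are illustrative assumptions, not output from a real Firstlight run.

```python
# Hypothetical fix-script sketch: pin an unpinned dependency to an exact
# version in requirements.txt, leaving every other line untouched.
# Package name and version are illustrative only.
def pin_dependency(requirements: str, package: str, version: str) -> str:
    out = []
    for line in requirements.splitlines():
        # Strip any existing "==" or ">=" specifier to get the bare name.
        name = line.split("==")[0].split(">=")[0].strip()
        out.append(f"{package}=={version}" if name == package else line)
    return "\n".join(out)

print(pin_dependency("requests\nflask>=2.0", "requests", "2.32.3"))
```

A real fix script would also regenerate the lockfile and open a commit; the point is that the change is mechanical enough for a coding agent to apply unattended.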

Code-quality report

The maintainability roll-up, the structural findings, and the documentation-coverage map — where the codebase is solid and where it's load-bearing and untested.

Architecture & code health in one read

Audit trail

Every run logged — repo hash, tokens, cost, start and finish — and nothing customer-identifying. On hosted runs the repo identifier is a hash, never your source.

A defensible record of every assessment
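Conceptually, keying the audit record by a hash rather than the repo name looks something like the sketch below. The field names and figures are illustrative assumptions, not the actual record format.

```python
import hashlib

# Conceptual sketch only: an audit record keyed by a hash of the repo
# identifier, so the log stores neither the repo name nor its contents.
def audit_key(repo_identifier: str) -> str:
    return hashlib.sha256(repo_identifier.encode()).hexdigest()

record = {
    "repo": audit_key("github.com/acme-fintech/payments-api"),
    "tokens": 184_502,                       # illustrative figures
    "cost_usd": 1.42,
    "started": "2025-01-10T09:00:00Z",
    "finished": "2025-01-10T09:04:32Z",
}
```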

Compliance crosswalk

Every finding mapped to SOC 2 Trust Services Criteria and ISO/IEC 27001 Annex A — and to your own diligence checklist. Ready to drop into a security questionnaire or audit prep.

The auditor's view, prefilled

JSON export

The full structured result — every finding, the dimension scores, framework mappings, compliance tags — schema-validated, for your own tooling and data room.

Pipe it straight into your diligence stack
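As a sketch of what "pipe it into your stack" means in practice: load the export and filter on severity. The field names here ("findings", "severity", "evidence") are assumptions for illustration; check the published schema for the real shape.

```python
import json

# Hypothetical consumer of the JSON export: pull out the deal-killers.
# Field names are illustrative, not the actual Firstlight schema.
export = json.loads("""
{
  "findings": [
    {"id": "DET-SEC-1", "severity": "deal-killer",
     "evidence": "config/settings.py:14"},
    {"id": "OPS-RUN-4", "severity": "medium",
     "evidence": "ops/runbooks/"}
  ]
}
""")

deal_killers = [f for f in export["findings"] if f["severity"] == "deal-killer"]
for f in deal_killers:
    print(f["id"], f["evidence"])
```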

Sample report

What a report looks like

The head of a technical-findings artefact — every line cited to the repo. Sample data; not a real customer.

Assessment complete
acme-fintech / payments-api
4m 32s · 7 dimensions · 8 framework families referenced · 23 findings · 284 evidence items cited · DD rubric v1
2 deal-killers · 5 high · 9 medium · 7 low
Deal-killer·DET-SEC-1·Security·OWASP ASVS V2.10
Live database credentials committed to git history config/settings.py:14
High·LLM-AI-3·AI claims·NIST AI RMF 1.0
“AI engine” is one gpt-4o-mini call behind a two-second artificial delay services/assistant.py:88
Medium·DET-ARCH-2·Multi-tenant isolation·ISO/IEC 25010 (SQuaRE)
Tenant ID is read from a client-supplied header with no server-side check middleware/tenant.py:31
Medium·OPS-RUN-4·Operational governance·Google SRE
No rollback runbook documented for production deploys ops/runbooks/
+ 19 more findings — across 7 dimensions and 8 framework families, keyed to SOC 2 & ISO 27001. Plus a remediation workbook, AI-agent fix scripts, a code-quality report, an audit trail, and the JSON export.
Sample data — not a real customer.

Who it's for

For founders, investors, and acquirers

The same evidenced assessment — read three ways.

For founders

Walk into the raise knowing what they'll find.

  • Find the deal-killers before the data room opens
  • See your codebase the way a buyer's diligence team will
  • A costed, phased remediation plan you can actually work

For investors

Technical DD without booking the white-glove team.

  • A first-pass technical read in minutes — before you've booked a call
  • Every portfolio company's scores in one view (Meridian / Apex)
  • A real-vs-wrapper AI verdict — a genuine model, or a gpt-4o-mini call behind a spinner?

For acquirers

Deal-grade evidence, ready for the diligence file.

  • Every finding cited to a line — keyed to SOC 2 TSC and ISO 27001 Annex A
  • A post-close remediation plan, sequenced and owner-by-role
  • Multi-tenant isolation, security posture, ops governance — the parts that bite after close

How it works

Three steps to an evidenced report

Point Firstlight at a repo. It runs the deterministic checks and the LLM dimension analysers against the shared DD rubric. You get eight artefacts — every finding cited to a line of code.

1

Connect a repo

A public repo URL — or a short-lived, fine-grained, read-only GitHub token scoped to one repo. On local mode your source never leaves your environment.
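For reference, a token-scoped HTTPS clone of a single private repo can take the form below. GitHub accepts a fine-grained token as the password component of the URL; "x-access-token" is a conventional placeholder username. This is an illustration of the access pattern, not Firstlight's internal code — and never commit or log the real token.

```python
# Illustrative only: build the HTTPS clone URL for a single repo using a
# short-lived, read-only, fine-grained token as the password component.
def clone_url(owner: str, repo: str, token: str) -> str:
    return f"https://x-access-token:{token}@github.com/{owner}/{repo}.git"

print(clone_url("acme-fintech", "payments-api", "TOKEN"))
```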

2

Run the analysis

Deterministic checks — committed secrets, tenant-header trust, bus-factor comments — plus the LLM dimension analysers across all seven dimensions, scored on the rubric.

3

Get the evidence

Eight artefacts — exec summary, findings with file:line, remediation workbook, AI-agent fix scripts, code-quality report, audit trail, compliance crosswalk, JSON export. Yours in minutes.

Frameworks

Scored against eight framework families

Every finding mapped to a recognised standard — including Google's own SRE and DORA, evaluated the way Google would. And every finding is additionally tagged to SOC 2 Trust Services Criteria and ISO/IEC 27001 Annex A, keyed to your diligence checklist.

Google SRE + DORA

Reliability and delivery performance — error budgets, the four DORA metrics, runbook discipline.

ISO/IEC 25010 (SQuaRE)

Software product quality — maintainability, reliability, security, portability, performance.

OWASP ASVS + SAMM

Application-security verification and maturity — auth, sessions, access control, data handling.

NIST SSDF (SP 800-218)

Secure software development — the pipeline practices a buyer's security team expects.

NIST AI RMF 1.0

AI risk management — a governed model, or a wrapper, and documented like one?

Google SAIF

Secure-AI framework — prompt injection, data poisoning, model and output handling.

OWASP Top 10 for LLMs

The LLM-app attack surface — injection, insecure output handling, training-data and supply-chain risk.

Diátaxis + arc42 / C4

Documentation and architecture maturity — is the system explained the way an acquirer needs?

Pricing

Start free — then scale with the deal flow

One free assessment every month. Then Daylight at US$249, Meridian at US$999, or Apex from US$2,499 — all in USD, with AUD shown alongside. No per-seat surprises.

Free

$0

Run the engine on a public repo, in your own environment.

  • 1 assessment / month
  • 3 of the 8 artefacts · watermarked
  • Local execution — your source never leaves
  • ≈US$2 inference cap per account
Start free

Daylight

US$249 / mo
≈A$380 / mo

For an active deal-doer who needs the full report.

  • 5 assessments / month
  • All 8 artefacts, no watermark
  • Local execution
  • Email support
Buy now
Most chosen

Meridian

US$999 / mo
≈A$1,520 / mo

For a fund running diligence across a pipeline.

  • 25 assessments / month
  • Multi-seat
  • AI-agent fix scripts
  • Hosted execution (per-scan approval gate)
Buy now

Apex

from US$2,499 / mo
≈A$3,800 / mo

For an acquirer or a high-volume buy-side platform.

  • Unlimited assessments
  • SSO / SAML (post-GA)
  • Self-host option
  • Named customer success manager
Talk to us

Trust

Built for code you can't risk leaking

How Firstlight handles your source — the short version.

Read-only, one repo

Private repos are pulled with a short-lived, fine-grained GitHub token scoped read-only to that single repository — used to clone, then dropped.

Local mode: nothing leaves

On the free-tier local backend the assessment runs inside your own infrastructure, against your own AI plan. Your source never reaches us.

No training, ever

Customer source is confidential — never logged in cleartext, never used to train a model, never replayed to a third party without your explicit consent.

Hosted runs: scratch only

A hosted run uses an ephemeral scratch workspace, destroyed when it finishes. The audit record stores a hash of the repo — never the content.

Human-approved provisioning

Every hosted run is provisioned only after a human approval step — no scan touches cloud resources until it's been waved through. NZ-domiciled — Millwater Consulting.

FAQ

Common Questions

Everything you'd want to know before running an assessment.

How does Firstlight get access to my code?
Do you train on my code?
What is the difference between local and hosted runs?
How is this different from a hands-on technical-DD engagement?
What do I actually get from a run?

Start with a free assessment

One free assessment every month — your code never leaves your environment.
