Transparency by Default

Open Engineering

Public roadmaps, open RFCs, and open artifacts form the backbone of trust and accountability.

Everything defaults to open unless there's a reason not to.

Our Commitments

Transparency isn't a feature. It's how we operate.

Public Roadmaps

What we're working on, what's experimental, what's paused. Updated quarterly.

Open RFCs

Proposed changes, new research ideas, framework designs. Community input welcome.

Open Artifacts

Docs, diagrams, code, evaluation frameworks. Everything defaults to open.

2026 Roadmap

What we're building, quarter by quarter.

Last updated: February 2026

Q1 2026

Current

Foundational Research & Baseline

  • AI in Software Engineering — Baseline
  • Human-in-the-Loop Workflow Diagrams
  • Trust & Observability Metrics v1
  • DAIP Task Evaluation Template

Q2 2026

Experimental Prototypes

  • AI-assisted coding experiments
  • Human-in-the-loop pilot projects
  • Metrics validation report
  • First DAIP cycle launch

Q3 2026

Applied Research & Refinement

  • Cross-domain framework testing
  • Design patterns refinement
  • Trust scorecards
  • Second DAIP cycle

Q4 2026

Synthesis & Publications

  • AI-Assisted SE Handbook
  • Human-in-the-Loop Design Standards
  • AI Literacy Curriculum v1
  • Year 2 Roadmap RFCs

Legend: Completed · In Progress · Planned

Open RFCs

Proposed research, program changes, and framework designs. All major decisions are documented publicly.

RFC-001 · Active · February 2026

DAIP Evaluation Rubrics

Objective assessment criteria for DAIP participants across four dimensions.

RFC-002 · Active · February 2026

Foundation Charter

Mission, principles, governance, and public commitments of DevSimplex Foundation.

RFC-003 · Active · February 2026

Research Roadmap 2026

One-year research plan covering AI-assisted SE, human-in-the-loop systems, and trust.

Want to propose an RFC? Contact governance@devsimplex.org

Open Artifacts

All our documentation, research outputs, and tools in one place.

RFC-001 · Active

DAIP Evaluation Rubrics

Each DAIP participant is assessed across four weighted dimensions aligned with the foundation's focus areas. Scoring scale: 1 (Minimal) to 5 (Exceptional). A worked scoring sketch follows the dimensions below.

Technical Competence (30%)

Quality, correctness, and completeness of work

Autonomy & Initiative (25%)

Ability to work independently within the framework

Human-in-the-Loop Awareness (25%)

Understanding and integrating human oversight

Trust & Observability (20%)

Ability to instrument, monitor, and evaluate AI
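
For concreteness, here is a minimal scoring sketch, assuming the composite is a plain weighted average of the four dimension scores. The weights and the 1-5 scale come from the rubric above; the function and key names are illustrative, not part of any published DAIP tooling.

```python
# Minimal sketch of a DAIP composite score: a weighted average of the
# four rubric dimensions. Weights mirror the published rubric; all names
# here are illustrative, not part of any official DAIP tooling.

WEIGHTS = {
    "technical_competence": 0.30,
    "autonomy_initiative": 0.25,
    "hitl_awareness": 0.25,
    "trust_observability": 0.20,
}

def composite_score(scores: dict[str, int]) -> float:
    """Combine per-dimension scores (1-5) into one weighted 1-5 composite."""
    if set(scores) != set(WEIGHTS):
        raise ValueError("scores must cover exactly the four rubric dimensions")
    for dim, s in scores.items():
        if not 1 <= s <= 5:
            raise ValueError(f"{dim}: score {s} is outside the 1-5 scale")
    return round(sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS), 2)

# Example: ratings of 4, 3, 5, and 4 yield a composite of 4.0.
print(composite_score({
    "technical_competence": 4,
    "autonomy_initiative": 3,
    "hitl_awareness": 5,
    "trust_observability": 4,
}))
```

Because the weights sum to 1.0, the composite stays on the same 1-5 scale as the individual dimensions.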

Contribute to Open Engineering

Open engineering means community input. Review our RFCs, suggest improvements, or contribute to our open repositories.