
Foundation

What is MBSE?

Model-Based Systems Engineering (MBSE) is the practice of using formal, linked digital models—requirements, architecture, behavior, interfaces—as the primary artifacts for analysis, verification, and collaboration. It reduces ambiguity from narrative-only specs, improves traceability from intent to evidence, and scales better for aerospace, automotive, defense, and other complex product lines where decisions must be defended under review.

Our approach

How VectorOWL carries MBSE

This is model-based systems engineering built for teams that already use AI: assistants and integrators need a single, versioned source of structural truth. VectorOWL does not park MBSE in one diagramming silo. The ontology holds logical relationships, including, when you adopt them, structured fields for computational-model identity, envelope, and credibility links. The vector layer retrieves over CFD, FEA, telemetry, and documents. Anchors encode non-negotiable constraints. And Model Context Protocol connects CAD, solvers, PLM, and AI hosts so updates propagate as governed context, not side-channel exports that agents cannot see.

Source of truth

Engineering intent and evidence roll forward in Git: reviewable diffs, branch isolation for experiments, and merge gates when trace matrices or anchor checks must pass.

Assistants and humans

The same graph-backed context powers human review and coding-agent workflows—so suggestions reference URIs, runs, and tool-fed attributes rather than orphaned prose.

Computational models

Characterization beside architecture

Systems architects live in requirements and structure; analysts live in meshes, ROMs, and lab data. Industry patterns (see the community Model Characterization Pattern v1.8.1 PDF) emphasize a portable wrapper for any computational model: intended use, applicability envelope, VVUQ posture, interoperability, lifecycle cost, and regulator-facing cues. VectorOWL treats that wrapper as first-class graph content alongside your system model—vectors help find similar models and evidence; Model Context Protocol keeps tool-generated artifacts attached to the right URIs.

Full narrative: Framework · computational models. Official Model Characterization Pattern (v1.8.1, MBSE “MCP”): PDF on OMG MBSE Wiki. Pattern hub: OMG MBSE Patterns.

Mechanisms

What the stack operationalizes

  1. Version-aligned modeling: ontology snapshots and configuration evolve with branches and pull requests; impact review applies to engineering artifacts, not only documents.
  2. Traceability: link requirements-style assertions to architecture elements, simulations, tests, and verification records in one navigable graph.
  3. Change awareness: surface affected nodes and tools when upstream assumptions or constraints move—before silent drift accumulates.
  4. Verification posture: anchors and recorded checks document what satisfied which claims, supporting audits and release discipline.
  5. Tool and AI integration: Model Context Protocol exposes structured context so engineering tools and assistants operate against the same substrate your team reviews.

Workflow

Example program flow

  1. Author structure and constraints in the ontology-backed model; branch for proposed changes.
  2. Ingest simulation, telemetry, and documents into the vector tier with links to relevant individuals and scenarios.
  3. Establish trace links from requirements-style claims through architecture to verification evidence.
  4. Synchronize discipline tools via Model Context Protocol context servers; reconcile identities through the registry pattern described in our architecture narrative.
  5. Iterate with assistants against the shared graph—always review merges as you would production schema changes.
  6. Run checks in CI where applicable: broken axioms, missing traces, anchor breaches.
  7. Merge and release with lineage intact: what shipped matches what the model and logs assert.
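A merge-gate check like step 6 can be sketched with plain POSIX tools. The file names, formats, and REQ/SIM identifiers below are illustrative assumptions for the sketch, not a VectorOWL convention:

```shell
#!/bin/sh
# Illustrative CI check: report requirements-style claims with no trace link.
set -eu
work=$(mktemp -d)
cd "$work"

# Assumed inputs: one requirement ID per line, and a CSV trace matrix of
# requirement, relation, evidence.
cat > requirements.txt <<'EOF'
REQ-001
REQ-002
EOF
cat > trace-links.csv <<'EOF'
REQ-001,verified-by,SIM-042
EOF

cut -d, -f1 trace-links.csv > traced.txt
# Lines in requirements.txt with no exact match in traced.txt are untraced;
# a real merge gate would exit nonzero when this list is non-empty.
echo "Untraced requirements:"
grep -vxFf traced.txt requirements.txt
```

Here the check would flag REQ-002, which has no entry in the trace matrix.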

Outcomes

Why teams adopt this shape

Fewer stale handoffs

Models and evidence evolve in-repo instead of diverging across file shares and ticket comments.

Defensible decisions

Trace links and anchor logs support reviews in regulated domains without treating AI output as authority.

Faster alignment

One substrate for research, integration, and leadership narrative reduces contradictory interpretations of “current truth.”

Room for continuous data

Vectors complement OWL where meshes, streams, and corpora do not compress cleanly into predicates alone.

Install & use

Model Context Protocol runtime (required for AI/tool integration)

The stdio Model Context Protocol server vectorowl-mcp fronts vectorowld over gRPC. Register it in Claude Desktop, Cursor, or any MCP-capable host: that is how assistants and automation get tools and resources, not vibes. (Not to be confused with the other “MCP,” the INCOSE Model Characterization Pattern, which names metadata for computational models; see the framework overview.) Optional SKILL.md bundles teach vocabulary only—same stack, complementary layer.

Register the MCP server

Nothing in this marketing repository compiles vectorowl-mcp or vectorowld—get those from a VectorOWL build or your release channel. After the binaries exist, wire the host in order:

  1. Run vectorowld and note the gRPC listen address (commonly 127.0.0.1:50051). Tool calls fail if nothing is listening at VECTOROWL_GRPC_ENDPOINT.
  2. Open your host’s MCP config (one JSON file per application):
    • Cursor: ~/.cursor/mcp.json for all projects, or <repo-root>/.cursor/mcp.json for one checkout only.
    • Claude Desktop (Linux): ~/.config/Claude/claude_desktop_config.json (see technical page · Try MCP for macOS and Windows paths).
  3. Merge the snippet: the file must contain a top-level "mcpServers" object. Add or replace only the vectorowl-runtime key—leave other servers (filesystem, GitHub, etc.) untouched. If the file is empty or new, you can paste the whole block below.
  4. Set "command" to vectorowl-mcp if which vectorowl-mcp works; otherwise use the absolute path to your executable (or in Cursor, e.g. "${userHome}/.cargo/bin/vectorowl-mcp").
  5. Quit and reopen Cursor or Claude Desktop so MCP reloads.
{
  "mcpServers": {
    "vectorowl-runtime": {
      "type": "stdio",
      "command": "vectorowl-mcp",
      "args": [],
      "env": {
        "VECTOROWL_GRPC_ENDPOINT": "127.0.0.1:50051",
        "VECTOROWL_LOG_LEVEL": "info"
      }
    }
  }
}
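Applied to a config that already registers another server, step 3 leaves the existing entries alone. The filesystem entry below is hypothetical and only shows the shape of a merged file:

```json
{
  "mcpServers": {
    "filesystem": {
      "type": "stdio",
      "command": "mcp-server-filesystem",
      "args": ["/home/user/projects"]
    },
    "vectorowl-runtime": {
      "type": "stdio",
      "command": "vectorowl-mcp",
      "args": [],
      "env": {
        "VECTOROWL_GRPC_ENDPOINT": "127.0.0.1:50051",
        "VECTOROWL_LOG_LEVEL": "info"
      }
    }
  }
}
```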

Prefer a script that edits JSON for you without hand-merging? Use the copy-paste blocks under vectorowl.html · Try MCP. Full template file: vectorowl-mcp-skill/mcp-config.example.json.
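If the referenced script is unavailable, a minimal sketch with python3 can do the same merge; the use of python3 and the per-project Cursor path shown are assumptions, not part of the VectorOWL tooling:

```shell
#!/bin/sh
# Sketch: add or replace only the vectorowl-runtime key in an MCP config,
# preserving any other registered servers. Assumes python3 on PATH.
cfg=".cursor/mcp.json"
mkdir -p "$(dirname "$cfg")"
[ -s "$cfg" ] || echo '{}' > "$cfg"   # start from an empty object if new
python3 - "$cfg" <<'PY'
import json, sys

path = sys.argv[1]
with open(path) as f:
    data = json.load(f)

# Touch only our key; other mcpServers entries are left untouched.
data.setdefault("mcpServers", {})["vectorowl-runtime"] = {
    "type": "stdio",
    "command": "vectorowl-mcp",
    "args": [],
    "env": {
        "VECTOROWL_GRPC_ENDPOINT": "127.0.0.1:50051",
        "VECTOROWL_LOG_LEVEL": "info",
    },
}

with open(path, "w") as f:
    json.dump(data, f, indent=2)
PY
```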

If “none of this works,” check which repository you are in. This marketing site’s npm run build only produces static HTML for hosting—it does not compile vectorowl-mcp or vectorowld. Until you build those from the VectorOWL codebase (or install an internal release), MCP registration will fail with “command not found,” or with gRPC connection errors when the runtime is not listening. Both are expected until the binaries exist on your machine.
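Two quick probes separate “binary missing” from “runtime not listening.” This is a sketch assuming bash (for the /dev/tcp pseudo-device) and the default port from the snippet above:

```shell
#!/bin/bash
# Triage sketch: distinguish a missing binary from a runtime that isn't up.
if command -v vectorowl-mcp >/dev/null 2>&1; then
  echo "vectorowl-mcp found at $(command -v vectorowl-mcp)"
else
  echo "vectorowl-mcp not on PATH - build or install it first"
fi

# bash's /dev/tcp redirection opens a TCP connection without needing netcat.
if (exec 3<>/dev/tcp/127.0.0.1/50051) 2>/dev/null; then
  echo "something is listening on 127.0.0.1:50051"
else
  echo "nothing listening on 127.0.0.1:50051 - start vectorowld"
fi
```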

Runtime (vectorowld) + MCP bridge

Install vectorowld and vectorowl-mcp from your VectorOWL build or release artifact. Architecture and CLI notes: technical page · runtime section.

Optional: install SKILL.md (assistant vocabulary)

If your editor loads folder-based skills, run this one-liner from the repository root after cloning (it copies SKILL.md plus references into Cursor’s skills path). It does not replace MCP registration.

chmod +x scripts/install-vectorowl-skill.sh && ./scripts/install-vectorowl-skill.sh --project

Project-local installs land in .cursor/skills/vectorowl-neuro-symbolic-mbse/. For a global install:

chmod +x scripts/install-vectorowl-skill.sh && ./scripts/install-vectorowl-skill.sh

Typically ~/.cursor/skills/vectorowl-neuro-symbolic-mbse/. Restart Cursor and enable the skill. See vectorowl-mcp-skill/README.md for the MCP vs skill split.