rao-v 10 hours ago

I don’t really think this reflects the current era of challenges?

The “enforcement layer” is the hardest and most important part, and is barely addressed.

- is the answer structurally / syntactically valid?

- is it appropriately grounded and evidenced?

- is it accurate? In what ways does it fall short?

Each of these should trigger the agent to rework and resubmit, etc., or, failing that, a disclosure to the user about how the answer falls short and should be reviewed / remediated.
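(The loop described above could be sketched roughly like this. Everything here is hypothetical: the three checks are placeholders, and `generate`, `enforce`, and the failure names are invented for illustration.)

```python
# Hypothetical sketch of the check-and-rework loop: each validator either
# passes or triggers a resubmit; after max_retries, the answer ships
# together with a disclosure of which checks still fail.

def is_valid(answer: str) -> bool:
    # placeholder structural/syntactic check (e.g. output parses as JSON)
    return answer.strip().startswith("{")

def is_grounded(answer: str, sources: list[str]) -> bool:
    # placeholder grounding check: answer references at least one known source
    return any(src in answer for src in sources)

def is_accurate(answer: str) -> bool:
    # placeholder accuracy check (in practice: a judge model or test suite)
    return "TODO" not in answer

def enforce(generate, sources, max_retries=3):
    failures: list[str] = []
    answer = ""
    for _ in range(max_retries):
        # feed prior failures back so the agent can rework its answer
        answer = generate(feedback=failures)
        failures = []
        if not is_valid(answer):
            failures.append("structure")
        if not is_grounded(answer, sources):
            failures.append("grounding")
        if not is_accurate(answer):
            failures.append("accuracy")
        if not failures:
            return answer, []
    # failing that: disclose to the user how the answer falls short
    return answer, failures
```

The important part is the second return value: when retries are exhausted, the caller gets an explicit list of shortcomings to surface to the user instead of a silently unchecked answer.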

This feels like it’s from the era of trying to oneshot a good enough answer.

zihotki 2 hours ago

No numbers/measurements/benchmarks and you dare call it "a working" one? Any real proof that this 'works'?

newsdeskx 9 hours ago

enforcement is the hard part. most context engineering stuff describes what should happen, not what actually stops it from happening. curious how your enforcement layer handles runtime checks vs just descriptive ones

slashdave 11 hours ago

> the information an AI system needs to produce accurate ... outputs

I would have stuck a qualifier in there

tmpz22 11 hours ago

Putting "engineering" after a term doesn't make it engineering.

  • slashdave 11 hours ago

    Probably just using the convention started by the term "prompt engineering", which is forgivable.

    • sroussey 11 hours ago

      not sure i forgive "prompt engineering"

  • jryio 10 hours ago

    Software engineering is certainly not engineering. Even at the highest levels. Real engineering has infinitely more complex interactions with the physical world than symbolic instructions for machines.

    • whattheheckheck 7 hours ago

      That's right, no need to understand anything other than symbols on a machine. No people involved. No reality to model. No economics to think about. Nothing like real engineering. That's for the big boys and girls