Healthcare Exchange Security: SLS Explained

by Grace Chen

A software developer spent just 15 minutes building a functional application using artificial intelligence, then refined it over two days without writing a single line of code. This demonstrates the rapidly evolving potential of AI-assisted development, even as questions about its reliability and impact linger.

AI Powers Rapid App Development, But Bugs Still Bite

New tools are letting developers create software with unprecedented speed, but human oversight remains crucial.

  • GitHub Copilot, powered by Claude Sonnet 4.5, was used to create two applications with minimal human coding.
  • The first app, a SAMHSA ValueSet viewer, addresses a limitation in existing software for handling large datasets.
  • The second, a Security Labeling Service reference implementation, required more complex features and data integration.
  • Testing revealed bugs that were quickly fixed with AI assistance, highlighting the iterative nature of this development process.
  • The developer emphasizes the importance of verifying AI-generated code and ensuring its functionality.

The developer, John Moehrke of Moehrke Research LLC, initially tasked the AI with creating a github.io app to view the contents of ValueSets from the Substance Abuse and Mental Health Services Administration (SAMHSA). He explained that existing software couldn’t properly display these large datasets, and a requested feature upgrade was denied. “All I did was ask co-pilot to make me an application that can use a FHIR defined $expand operation against the tx.fhir.org server, for a list of ValueSets by url; and display the results,” he said.
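For readers unfamiliar with the operation Moehrke describes, the request he asked Copilot to implement can be sketched in a few lines. This is not his code; it is a minimal illustration of calling $expand by canonical `url` against a terminology server. The R4 endpoint path, the ValueSet URL, and the codes in the sample response are assumptions for illustration only.

```python
from urllib.parse import urlencode
import json

# Assumption: tx.fhir.org exposes an R4 base at this path.
TX_SERVER = "https://tx.fhir.org/r4"

def expand_request_url(valueset_url: str, count: int = 1000, offset: int = 0) -> str:
    """Build a GET URL for the FHIR $expand operation, identifying the
    ValueSet by its canonical url parameter."""
    params = urlencode({"url": valueset_url, "count": count, "offset": offset})
    return f"{TX_SERVER}/ValueSet/$expand?{params}"

def codes_from_expansion(valueset_json: str) -> list[tuple[str, str]]:
    """Pull (system, code) pairs out of an expanded ValueSet resource."""
    resource = json.loads(valueset_json)
    return [(c["system"], c["code"])
            for c in resource.get("expansion", {}).get("contains", [])]

# Trimmed, illustrative shape of a server's expansion response;
# the codes are placeholders, not real LOINC/SNOMED entries.
sample = json.dumps({
    "resourceType": "ValueSet",
    "expansion": {
        "total": 2,
        "contains": [
            {"system": "http://loinc.org", "code": "12345-6"},
            {"system": "http://snomed.info/sct", "code": "999999"},
        ],
    },
})

print(expand_request_url("http://example.org/fhir/ValueSet/samhsa-demo"))
print(codes_from_expansion(sample))
```

Paging with `count` and `offset` is what makes large SAMHSA ValueSets tractable, which is exactly the limitation Moehrke says existing viewers hit.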

Initial testing by his family quickly uncovered a bug related to a “Check All Sizes” feature. Moehrke simply described the issue to the AI, and it promptly provided a fix, adding just five minutes to the development time.

The second project, a Security Labeling Service (SLS) reference implementation, was more ambitious. Moehrke created a GitHub repository and provided a brief description of his goals in a README.md file. The AI generated a working application on the first attempt, even though he had initially omitted crucial details such as the need for a Docker-deployable server and FHIR $operation compliance.

He noted the AI intelligently started with sample ValueSet bundles and data, providing a reasonable foundation for further development. Much of the subsequent time was spent ensuring the SLS functioned correctly, requiring complex ValueSets and data derived from work within the SHIFT-Task-Force. He initially separated data use-cases from the SLS and ValueSets to improve build speed, recognizing that the ValueSets were the primary bottleneck.

The process revealed errors in existing data tagging and the need to properly indicate the "topic" of each ValueSet, essentially categorizing the sensitive data it contains. For example, a ValueSet might have a topic of "BH" (behavioral health) and include relevant codes from LOINC, SNOMED, and ICD.
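The core lookup an SLS performs can be sketched as follows. This is not Moehrke's implementation, just a minimal illustration of the idea: expanded ValueSets act as membership sets keyed by topic, and any code found in one gets the matching sensitivity label. The in-memory data structure and the placeholder codes are assumptions; the v3-ActCode system URL is the HL7 code system commonly used for such labels.

```python
# HL7 code system typically used for sensitivity labels in meta.security.
ACT_CODE = "http://terminology.hl7.org/CodeSystem/v3-ActCode"

# Each expanded ValueSet becomes a set of (system, code) members,
# keyed by its topic. Codes here are illustrative placeholders.
expanded_valuesets = {
    "BH": {  # behavioral health
        ("http://loinc.org", "placeholder-loinc"),
        ("http://snomed.info/sct", "placeholder-snomed"),
    },
}

def security_labels_for(system: str, code: str) -> list[dict]:
    """Return meta.security-style codings for every topic whose
    expanded ValueSet contains this (system, code) pair."""
    return [{"system": ACT_CODE, "code": topic}
            for topic, members in expanded_valuesets.items()
            if (system, code) in members]

print(security_labels_for("http://loinc.org", "placeholder-loinc"))
```

Viewed this way, the tagging errors Moehrke found are simply mismatches between a ValueSet's declared topic and the codes it actually contains.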

Further testing with larger ValueSets and data continued to uncover bugs and inspire new features. “Next up is to see if my kids can break this,” Moehrke said, adding that any fixes he makes will be visible on GitHub.

Interestingly, Moehrke’s household has a divided opinion on AI. While his children express strong dislike for the technology, he remains cautiously optimistic, acknowledging its potential while remaining aware of its limitations and potential risks. “I am very suspicious, I have seen it really mess up, and I have seen the movies enough to worry about what it might do. But I choose to work with it in order to make it better at helping humans.”
