
John Snow Labs Launches Automated Testing for Responsible AI—the First No-Code Tool to Test and Evaluate Custom Language Models

Novel Capabilities Enable Domain Experts to Test AI Models for Fairness, Bias, Robustness, and Accuracy with Full Transparency and Reusability of Test Suites

LEWES, Del., July 25, 2024 (GLOBE NEWSWIRE) -- John Snow Labs, the AI for healthcare company, today announced the release of Automated Responsible AI Testing Capabilities in the Generative AI Lab. This is a first-of-its-kind no-code tool to test and evaluate the safety and efficacy of custom language models. It enables non-technical domain experts to define, run, and share test suites for AI model bias, fairness, robustness, and accuracy.

This capability is based on John Snow Labs’ open-source LangTest library, which includes more than 100 test types covering different aspects of Responsible AI, from bias and security to toxicity and political leaning. LangTest uses Generative AI to automatically generate test cases, making it practical to produce a comprehensive test suite in minutes instead of weeks. Built specifically for testing custom AI models, LangTest covers models that general-purpose benchmarks and leaderboards do not.
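For teams working directly with the open-source library rather than the no-code Lab, the sketch below shows how a test suite might be generated and run with langtest's Harness API; the task and model name are illustrative placeholders, not part of the announcement:

```python
# Minimal sketch using the open-source langtest package (pip install langtest).
# The Hugging Face model name below is an illustrative example; any custom model
# supported by langtest could be substituted.
from langtest import Harness

# Configure a harness for a named-entity-recognition model hosted on Hugging Face.
harness = Harness(
    task="ner",
    model={"model": "dslim/bert-base-NER", "hub": "huggingface"},
)

# Auto-generate test cases, run them against the model, and summarize pass/fail
# rates per test category (robustness, bias, etc.) in a report.
harness.generate().run().report()
```

The generated test cases and results can be saved and re-run against new model versions, which is the reusability the Generative AI Lab exposes through its no-code interface.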

Recent US regulation and legal guidance have made this kind of testing essential for companies looking to release new AI-based products and services, including:

  • The ACA Section 1557 Final Rule, which took effect in June 2024 and prohibits discrimination in medical AI algorithms on the basis of race, color, national origin, sex, age, or disability.
  • The HTI-1 Final Rule on transparency in medical decision support systems, which requires companies to show how they’ve trained and tested their models.
  • The American Bar Association guidelines, which call for comprehensive internal and third-party audits prior to AI deployment, issued in response to lawsuits against companies whose models automatically match job descriptions with candidates’ resumes.

The need for a comprehensive testing solution for Large Language Models (LLMs) is urgent. Yet many domain experts lack the technical expertise to perform such testing, while many data scientists lack the domain expertise to build comprehensive, industry- and task-specific models. The Generative AI Lab enables domain experts to create, edit, and understand how a model is being tested without needing a data scientist. The tool also embodies best practices such as versioning, sharing, and automated execution of tests for every new model.

“There has long been a gap between how AI models should be tested and how they often are. The new Generative AI Lab helps by making it far easier for teams to deliver AI models that are safe, effective, fair, and transparent,” said David Talby, CTO, John Snow Labs.

The software is available now for on-premises deployment as well as on the major public cloud marketplaces. To learn more and see this capability in action, join us for a webinar, “Automated Testing of Bias, Fairness, and Robustness of Language Models in the Generative AI Lab,” at 2 p.m. ET on Wednesday, July 31.

About John Snow Labs
John Snow Labs, the AI for healthcare company, provides state-of-the-art software, models, and data to help healthcare and life science organizations put AI to good use. Developer of Spark NLP, Healthcare NLP, the Healthcare GPT LLM, the Generative AI Lab No-Code Platform, and the Medical Chatbot, John Snow Labs’ award-winning medical AI software powers the world’s leading pharmaceutical companies, academic medical centers, and health technology companies. Creator and host of The NLP Summit, the company is committed to further educating and advancing the global AI community.

Contact
Gina Devine
John Snow Labs
gina@johnsnowlabs.com

