Striveworks Introduces Valor, the Open-Source Tool for Evaluating Model Performance

Striveworks, a leading machine learning operations firm, has launched Valor, a state-of-the-art, open-source tool that revolutionizes model performance evaluation. Designed to streamline the model evaluation process for machine learning teams, Valor uses metadata and evaluation metrics to rank-order models, saving time and enhancing efficiency.

Austin, TX, February 22, 2024 --(PR.com)-- Striveworks, a leader in machine learning operations, is proud to announce the launch of Valor, an open-source solution that revolutionizes how data scientists and engineers evaluate machine learning models for their use cases.

Valor—a play on the words “evaluation store”—is a state-of-the-art tool designed to save ML teams time and effort in evaluating model performance. It provides answers to three critical questions for any organization using machine learning, and especially those running multiple machine learning pipelines:

Which model performs best on a given dataset?
How does the performance of a single model vary across datasets?
How do fine-grained differences between data segments affect a model’s performance?

By drawing on metadata and evaluation metrics, Valor identifies how models perform on a given dataset. It then rank-orders models based on filter conditions, empowering machine learning teams to select the optimal models for their data pipelines.
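To make that workflow concrete, the sketch below shows how a team might compare two candidate models on a metadata-filtered slice of a dataset and rank them by a stored metric. It is an illustrative sketch only: the endpoint, dataset and model names, filter syntax, and method names are assumptions made for illustration, not Valor’s documented API; the GitHub repository describes the actual interface.

```python
# Illustrative sketch only. Method names, the filter syntax, and all dataset,
# model, and endpoint names below are assumptions, not Valor's documented API;
# see https://github.com/Striveworks/valor for the real interface.
from valor import Client  # assumed import path

client = Client("http://localhost:8000")  # assumed local Valor service

# Fetch a dataset and two candidate models (hypothetical names).
dataset = client.get_dataset("traffic-signs-q1")
models = [client.get_model(name) for name in ("resnet50-v2", "vit-base")]

# Evaluate each model on a metadata-defined slice, e.g., nighttime images.
evaluations = [
    client.evaluate_classification(
        model,
        dataset,
        filters={"time_of_day": "night"},  # assumed metadata filter syntax
    )
    for model in models
]

# Rank-order the candidates by a stored metric on that slice.
ranked = sorted(evaluations, key=lambda e: e.metrics["Accuracy"], reverse=True)
for e in ranked:
    print(e.model_name, e.metrics["Accuracy"])
```

The point of the pattern is that the filter conditions and metrics are stored alongside the evaluation itself, so a comparison like this can be reproduced and audited later.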

“Valor was born out of the necessity to improve and standardize how we evaluate machine learning models,” says Eric Korman, Chief Science Officer at Striveworks. “We wanted a central service to compute, define, store, and discover metrics, keeping track of what exactly went into a model evaluation so we can trust the results.”

Data scientists and machine learning engineers can use Valor, free of charge, to address common challenges with models in production: understanding model performance per dataset, recognizing how performance shifts across datasets, and confidently selecting optimal models for new pipelines. Valor makes it easy to discern nuanced differences in model performance, enabling machine learning teams to put the best models for their projects into production.

Valor is also the first major open-source project from Striveworks—an approach the company sees as strengthening the MLOps community as a whole.

“A modern evaluation service has been missing in the open-source MLOps tech stack, and we are excited and hopeful that Valor will fill this gap for the community,” says Korman. “This is just the beginning of model evaluation.”

Valor is now available to evaluate computer vision models for image segmentation, object detection, and image classification, as well as arbitrary models for classification tasks. Striveworks plans to support additional tasks and model types in the future.

Interested parties can now access Valor via the Striveworks GitHub repository at https://github.com/Striveworks/valor. All users are encouraged to engage with the Striveworks team through GitHub for ongoing feedback and collaboration.

About Striveworks
Striveworks is a pioneer in operational data science and machine learning for national security and other highly regulated spaces. Striveworks’ flagship machine learning operations (MLOps) platform, Chariot, is purpose-built to enable engineers and subject matter experts to transform their data into actionable insights. Striveworks was founded in 2018 and has been delivering software and ML solutions since 2019. By 2020, the National Security Commission on Artificial Intelligence highlighted Striveworks as an exemplar of operational data science in its final report. In 2023, Striveworks was recognized on the Deloitte Technology Fast 500™ as one of North America’s fastest-growing companies in technology.
Contact
Striveworks
Tracy Shank
805-874-2650
https://www.striveworks.com