Portkey brings AI Guardrails on top of its open source AI Gateway
Portkey is filling a critical gap in productionizing AI apps - by bringing Guardrails on top of the AI Gateway, AI teams can now ship to production confidently.
“There was a crucial piece missing when it came to productionizing AI systems: orchestrating the LLM's behavior based on guardrail verdicts on its inputs and outputs. We now solve it with this update!”
SAN FRANCISCO, CALIFORNIA, UNITED STATES, August 1, 2024 /EINPresswire.com/ -- "LLMs are brittle - not just in API uptime or their inexplicable 400/500 errors, but also in their core behavior. You can get a response with a 200 status code that still breaks your app's pipeline because the output doesn't match the expected format. We've long thought about the problem of fixing LLM outputs at Portkey, and wondered if we could bring some of the Guardrail abstractions into the Gateway directly. With Portkey's Guardrails, we now close the loop on building robust, reliable AI apps that behave exactly as you want, every time." — Rohit Agarwal
As Chip Huyen (VP of AI & OSS at Voltron Data) shared recently, Guardrails around a model Gateway solve two problems at once:
1. Identify and fix faulty LLM outputs, and
2. Orchestrate the request based on Guardrail results so the app actually works.
We are teaming up with multiple AI Guardrail leaders in the industry to do exactly this: bring their state-of-the-art AI Guardrails[1] on top of Portkey's open source AI Gateway[2], and make it incredibly easy for developers to use them in production. The Gateway orchestrates your LLM requests based on Guardrail results and makes your app behave exactly as it should.
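The two-step pattern described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration of guardrail-driven orchestration; the function names (orchestrate, check_is_json, call_llm) are illustrative assumptions, not Portkey's actual API:

```python
# Hypothetical sketch: a gateway runs guardrail checks on an LLM response
# and orchestrates (here: retries) based on the verdict.
# All names below are illustrative, not Portkey's actual API.
import json
from typing import Callable

def check_is_json(output: str) -> bool:
    """Guardrail: verify the LLM output parses as JSON."""
    try:
        json.loads(output)
        return True
    except ValueError:
        return False

def orchestrate(call_llm: Callable[[str], str], prompt: str,
                guardrails: list, max_retries: int = 2) -> str:
    """Call the LLM; forward the response only if every guardrail
    passes, otherwise retry up to max_retries times."""
    for _ in range(max_retries + 1):
        output = call_llm(prompt)
        if all(check(output) for check in guardrails):
            return output  # verdict: pass -> forward to the app
    raise RuntimeError("Guardrails failed after all retries")

# Usage with a stand-in model that returns valid JSON on the second try.
responses = iter(["not json at all", '{"ok": true}'])
result = orchestrate(lambda p: next(responses), "summarize as JSON",
                     [check_is_json])
```

The point of the sketch is the control flow: the guardrail verdict, not the HTTP status code, decides whether a response reaches the application.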
Vrushank Vyas
Portkey
+91 9700888848
email us here
Visit us on social media:
X
LinkedIn
YouTube
[1] https://portkey.ai/
[2] https://github.com/portkey-ai/gateway