RemyxAI's Backstage Pass: Sharing the AI Reasoning that Powers the Easiest Way to Customize Vision
Sign up for a complimentary custom computer vision model trained in minutes using our automated machine learning (AutoML) engine with synthetic data; no code or data is necessary with Remyx!
Recent breakthroughs in fine-tuning large language models (LLMs) have enabled millions to benefit from artificial intelligence in new and exciting ways. Compared to the previous generation of language models, modern techniques can bestow a remarkable capacity to follow user instructions. This ability can help developers simplify applications while providing a more flexible and personalized experience by enriching user prompts with business logic.
Starting from the powerful reasoning abilities of LLMs, we can further specialize the models to support users with everything bundled around a specific workflow. Indeed, one of the principal findings of InstructGPT, the research that gave rise to ChatGPT, was that specialization and alignment to user requests account for performance improvements over even much higher-capacity architectures with 100X more parameters. Modern parameter-efficient fine-tuning methods using low-rank adapter modules, à la alpaca-lora, make specializing foundation models on as few as tens of thousands of samples a breeze.
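The parameter arithmetic behind these low-rank adapter methods is easy to sketch. Below is a minimal, illustrative NumPy version of the idea; the layer size and rank are assumptions for illustration, not any particular model's configuration. The pretrained weight stays frozen while two small factors train in its place, so the trainable parameter count collapses.

```python
import numpy as np

# Illustrative sizes: a single 1024x1024 layer adapted at rank r = 8.
d_in, d_out, r = 1024, 1024, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in)) * 0.02  # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01      # trainable low-rank factor
B = np.zeros((d_out, r))                       # trainable, zero-initialized

def forward(x: np.ndarray) -> np.ndarray:
    """Adapted layer: base output plus the low-rank update (B @ A) x."""
    return x @ W.T + x @ A.T @ B.T

full_params = W.size
lora_params = A.size + B.size
print(f"trainable {lora_params} vs frozen {full_params}")  # 16384 vs 1048576
```

Training only A and B here touches about 1.5% of the layer's parameters, which is why a few tens of thousands of samples can go a long way.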
The design space of model training techniques represents a rich frontier for advances in AI. Researchers have cataloged results for other design choices validated against benchmark datasets for many years, and now we can deliver last-mile model optimizations specialized to your prediction task.
At Remyx, we use image and text generators to design datasets and apply AutoML to simplify creating custom computer vision models that meet the demands of efficient batch and real-time streaming inference over image and video.
LLMs and Explainable AI
Since our earliest Remyx Engine prototype, we’ve been applying LLMs to reason over user input, supporting more robust decisions in designing training datasets.
For an example of how we can benefit from reasoning over user input, consider the extreme case of “bass,” which has eight distinct senses in WordNet. With no additional information, it is impossible to determine which visual representation the user intended. But with a little extra context, such as “animal” or “music,” we can disambiguate what the user means by “bass.” This additional context helps us design high-quality datasets by generating realistic scenes that feature the target concept.
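A toy version of this disambiguation step can be sketched with a simple Lesk-style overlap heuristic: score each candidate sense by how many of its signature words appear in the user's context. The sense names and signature words below are illustrative placeholders, not actual WordNet records.

```python
# Toy sense inventory standing in for WordNet's entries for "bass".
SENSES = {
    "bass.fish":       {"animal", "fish", "freshwater", "catch"},
    "bass.voice":      {"music", "singer", "voice", "low"},
    "bass.instrument": {"music", "guitar", "strings", "amplifier"},
}

def disambiguate(word_senses: dict, context: set) -> str:
    """Pick the sense whose signature overlaps the user's context most."""
    return max(word_senses, key=lambda s: len(word_senses[s] & context))

print(disambiguate(SENSES, {"animal"}))           # -> bass.fish
print(disambiguate(SENSES, {"music", "guitar"}))  # -> bass.instrument
```

In practice an LLM can weigh far subtler cues than bag-of-words overlap, but the shape of the decision is the same: extra context collapses an ambiguous concept onto a single visual interpretation.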
While we can simplify applications with automated reasoning, another superpower of AI-enabled apps lies in the ability to educate the end user. For instance, researchers have showcased applications that assist users with code explanations and even undertake novel machine learning tasks!
Applying an LLM-based reasoning engine for explainable and interpretable AI excites beginners and experts alike, and we see clear potential to offer both a window into the reasoning of our AutoML engine. Revealing the execution plan to the user in natural language serves both an educational function and an intuitive sanity check.
That’s why we are pleased to share our latest update to the Remyx Model Engine. The dialogue model powering our chat assistant conveys the rationale behind our recommendations to the users. This new addition supports a broad spectrum of personas, from the beginner wishing to learn from our model’s reasoning to more opinionated users wanting the flexibility to customize the default execution plan.
With the feedback from our assistant, the ML novice might learn about mobile-friendly architectures used to support inference in real time. On the other hand, an expert might override the choice to use a MobileNetV2 backbone, opting instead to use EfficientNet or some Transformer-based model.
We can efficiently automate model selection using best-practice heuristics in computer vision while reasoning about user-provided context. Different deployment targets will impose various constraints on memory or choice of compute engine, typically restricting the model family. By considering these constraints from the outset, we can ensure that your custom, optimized model makes it onto the deployment target.
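One way such heuristics can be organized is as a small rule table mapping deployment constraints to backbone families. The thresholds and architecture choices below are hypothetical illustrations of the pattern, not Remyx's actual recipes.

```python
# Hypothetical rules: small memory budgets push toward mobile-friendly
# backbones; relaxed budgets without real-time needs allow larger models.
RULES = [
    # (device memory budget ceiling in MB, supports real-time?, backbone)
    {"max_mb": 15,   "realtime": True,  "backbone": "MobileNetV2"},
    {"max_mb": 50,   "realtime": True,  "backbone": "EfficientNet-B0"},
    {"max_mb": None, "realtime": False, "backbone": "ViT-B/16"},
]

def select_backbone(budget_mb: float, realtime: bool) -> str:
    """Return the first backbone whose rule fits the stated constraints."""
    for rule in RULES:
        small_enough = rule["max_mb"] is None or budget_mb <= rule["max_mb"]
        if small_enough and (rule["realtime"] or not realtime):
            return rule["backbone"]
    return "MobileNetV2"  # conservative default

print(select_backbone(10, realtime=True))    # tight budget -> MobileNetV2
print(select_backbone(200, realtime=False))  # server batch -> ViT-B/16
```

The LLM's role is upstream of a table like this: it turns free-form user context ("runs on a drone," "nightly batch job") into the structured constraints the rules consume.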
Automated reasoning starts with an LLM evaluation of a specialized prompt. To develop a personalized training program, we created a template that combines essential domain knowledge, several input-output exemplars, and user-provided context.
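In spirit, such a template might look like the following sketch. The wording, exemplars, and plan format here are invented for illustration and are not our production prompt.

```python
from string import Template

# Illustrative planner prompt: domain knowledge, a couple of
# input-output exemplars, and a slot for the user's own request.
PLAN_PROMPT = Template("""\
You are an AutoML planner for computer vision.

Domain knowledge:
- Edge targets favor small, quantization-friendly backbones.
- Real-time video requires low single-frame latency.

Examples:
Request: "classify birds on a Raspberry Pi" -> Plan: MobileNetV2, 224px, int8
Request: "sort products on a GPU server" -> Plan: EfficientNet-B4, 380px, fp16

Request: "$user_request"
Plan:""")

prompt = PLAN_PROMPT.substitute(
    user_request="detect helmets in real time on a phone"
)
print(prompt)
```

The few-shot exemplars anchor the output format, while the user-provided context fills the final slot, so the LLM's completion is a constrained, parseable plan rather than open-ended prose.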
Remyx Engine users express their preferences in natural language, and we apply recipes tested to avoid compatibility issues in model conversion. LLM-based reasoning helps select the most appropriate speed-versus-accuracy tradeoff for each user's deployment. In time, we will curate a comprehensive mapping from hardware targets to model architectures tuned to your specific prediction task.
We work with text and image data, so offering well-reasoned explanations in text and showing our work visually is valuable! That’s why we’re adding a model dashboard for users to review training logs and model performance on validation data. This fundamental autoML feature streamlines model evaluation by highlighting key performance metrics, loss curves, and model info, and we strive to push the limits in automating its analysis.
We also want to make running your model a snap, so the dashboard view includes sample code snippets to get you started quickly. You can choose from several popular model download formats, and we’ve even made it easy to run your custom lightweight models directly in the web browser using TensorFlow.js! That means you can validate the model on your own images quickly and privately, without sending your data anywhere!
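To give a flavor of the getting-started snippets, here is an illustrative, self-contained sketch of post-processing a downloaded classifier's raw logits into top-k labels; the label names are placeholders for your model's own classes, and the logits stand in for your model's actual output.

```python
import numpy as np

# Placeholder class names; substitute your model's own label list.
LABELS = ["largemouth bass", "trout", "salmon"]

def top_k(logits: np.ndarray, k: int = 2) -> list:
    """Convert raw logits to probabilities and return the k best labels."""
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    probs = exp / exp.sum()
    order = np.argsort(probs)[::-1][:k]
    return [(LABELS[i], float(probs[i])) for i in order]

# Stand-in logits, as if returned by the downloaded model on one image.
print(top_k(np.array([2.0, 0.5, 0.1])))
```

The same softmax-and-sort step applies whichever download format you choose; only the model-loading call changes.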
To recap, the landscape of AI development is evolving rapidly, and we are quickly incorporating the state-of-the-art to automate expert workflows. The pace of iteration in code development is increasing, and we at Remyx believe that reaching the best model for your application must keep in step.
The Remyx Model Engine represents our most ambitious attempt to democratize AI. With this latest upgrade, we’re one step closer to ML simplified for a 1000X larger audience.
Remyx Engine users enjoy privacy by design through training on our synthetically augmented, billion-scale image corpus; no user data is necessary to get started. And though the current version supports only image classifiers, when no competitive alternative exists in the model zoo, the Remyx Model Engine may be the best way to overcome the cold-start challenge facing innovators today.
Future feature updates will apply auto-labeling methods to expand our platform to more tasks like detection and segmentation. We are also developing few-shot learning techniques for the fast adaptation of models from limited user samples, so stay tuned!
We want to hear about your experiences as we strive to deliver a more personalized approach to model training. Let us know your thoughts by submitting feedback and training a complimentary model on us today just for signing up!