A significant focus of our research has been building and releasing ML systems that work in the real world, with the aim of gaining massive adoption, impacting industry and fundamentally influencing the design and architecture of such systems. Here are some key projects we have co-created:
- TextGrad: automatic optimization of prompts and outputs of LLM programs via textual gradients (example below)
- LOTUS: query engine for processing structured and unstructured data with LLMs
- XGBoost: scalable, portable and distributed gradient boosting library (example below)
- Alpaca: small and cheap (<$600) instruction-following large language model
- Apache TVM: end-to-end deep learning compiler stack for CPUs, GPUs and specialized accelerators
- LIME: explaining the predictions of any machine learning classifier (example below)
- Turi Create: simplifies the development of custom machine learning models
- MXNet: lightweight, portable, flexible distributed/mobile deep learning library (Apache incubated)
- Core ML Tools: converter tools for Apple’s Core ML framework (example below)
- SFrame: scalable tabular and graph data structures built for out-of-core data analysis and machine learning
- GraphChi: large-scale graph computations on a single machine
- GraphLab and PowerGraph: frameworks for large-scale machine learning and graph computation
- Matlab Toolbox for Submodular Function Optimization
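To give a flavor of how a few of these systems are used, here are short sketches. First, TextGrad: a minimal example in the spirit of its quickstart, assuming the `textgrad` package is installed and an OpenAI API key is set; the question, the critique instruction, and the choice of `gpt-4o` as the engine are all illustrative.

```python
# Minimal TextGrad sketch: refine an LLM's answer via textual "gradients".
# Assumes textgrad is installed and an OpenAI API key is configured.
import textgrad as tg

tg.set_backward_engine("gpt-4o", override=True)  # LLM that computes feedback

model = tg.BlackboxLLM("gpt-4o")
question = tg.Variable(
    "If it takes 1 hour to dry 25 shirts under the sun, "
    "how long will it take to dry 30 shirts?",
    role_description="question to the LLM",
    requires_grad=False,
)

answer = model(question)  # initial answer, treated as an optimizable variable
answer.set_role_description("concise and accurate answer to the question")

# A natural-language "loss": an instruction telling the engine how to critique.
loss_fn = tg.TextLoss("Evaluate the answer for correctness; be very critical.")
optimizer = tg.TGD(parameters=[answer])

loss = loss_fn(answer)
loss.backward()   # generate textual feedback on the answer
optimizer.step()  # rewrite the answer using that feedback
print(answer.value)
```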
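XGBoost is typically driven through its scikit-learn-compatible interface; the synthetic dataset and hyperparameter values below are illustrative assumptions, not tuned recommendations.

```python
# Train a gradient-boosted classifier with XGBoost's scikit-learn API.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = XGBClassifier(n_estimators=100, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```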
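LIME explains a single prediction by perturbing the instance and fitting a sparse local surrogate to the classifier's probability outputs; the iris data and random-forest model below are stand-ins for any black-box classifier.

```python
# Explain one prediction of a black-box classifier with LIME.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)
# LIME samples perturbed neighbors of the instance, queries the classifier,
# and fits a weighted sparse linear model to rank the features locally.
explanation = explainer.explain_instance(
    data.data[0], clf.predict_proba, num_features=4
)
print(explanation.as_list())
```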
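Core ML Tools converts trained models from frameworks such as TensorFlow or PyTorch into Core ML's format for on-device deployment; the tiny Keras model and output filename below are illustrative assumptions.

```python
# Convert a trained Keras model to a Core ML "ML Program" package.
import coremltools as ct
import tensorflow as tf

keras_model = tf.keras.Sequential(
    [tf.keras.layers.Dense(10, activation="softmax", input_shape=(4,))]
)

mlmodel = ct.convert(keras_model, convert_to="mlprogram")
mlmodel.save("classifier.mlpackage")  # loadable from an iOS/macOS app
```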