With curated content, concise summaries, and a user-friendly interface, ML Times is an efficient way for ML engineers to keep up with AI.
The Lambda Deep Learning Blog
Categories
- gpu-cloud (25)
- tutorials (24)
- benchmarks (22)
- announcements (19)
- lambda cloud (13)
- NVIDIA H100 (12)
- hardware (12)
- tensorflow (9)
- NVIDIA A100 (8)
- gpus (8)
- company (7)
- LLMs (6)
- deep learning (6)
- hyperplane (6)
- news (6)
- training (6)
- gpu clusters (5)
- CNNs (4)
- generative networks (4)
- presentation (4)
- research (4)
- rtx a6000 (4)
Recent Posts
Lambda was selected as NVIDIA's 2024 AI Excellence Partner of the Year for providing NVIDIA-powered systems via our Cloud and on-prem offerings.
Published 03/19/2024 by Robert Brooks IV
The NVIDIA GB200 Superchip and NVIDIA B200 and B100 GPUs will be available through Lambda’s On-Demand & Reserved Cloud, and NVIDIA DGX SuperPODs.
Published 03/18/2024 by Maxx Garrison
Lambda will be at NVIDIA GTC in 2024. We will showcase our NVIDIA GH200 benchmarks, how our ML team uses Retrieval Augmented Generation (RAG), and more.
Published 03/12/2024 by Maxx Garrison
Lambda raised a $320M Series C at a $1.5B valuation to expand our GPU cloud and further our mission to build the #1 AI compute platform in the world.
Published 02/15/2024 by Stephen Balaban
Maximize online research efficiency with ShadeRunner, a Chrome plugin featuring text highlighting, paragraph summarization, and topic suggestions.
Published 02/13/2024 by David Hartmann
Benchmarks comparing inference performance of the NVIDIA GH200 Grace Hopper Superchip, enhanced by ZeRO-Inference, to NVIDIA H100 and A100 Tensor Core GPUs.
Published 12/20/2023 by Chuan Li
Persistent storage is now available in all Lambda Cloud regions and for all on-demand instance types, including our NVIDIA H100 Tensor Core GPU instances.
Published 12/19/2023 by Kathy Bui
The Lambda Vector One is a single-GPU desktop PC built to tackle demanding AI/ML tasks, from fine-tuning Stable Diffusion to handling the complexities of Llama 2 7B.
Published 12/12/2023 by Samuel Park
Benchmarks on NVIDIA’s Transformer Engine, which boosts FP8 performance by an impressive 60% in GPT-3-style model tests on NVIDIA H100 Tensor Core GPUs.
Published 11/21/2023 by Chuan Li
Lambda will be one of the first cloud providers in the world to offer customers access to NVIDIA H200 Tensor Core GPUs through Lambda Cloud Clusters.
Published 11/13/2023 by Maxx Garrison
Lambda Cloud Clusters are now available with the NVIDIA GH200 Grace Hopper Superchip. A single GH200 has 576GB of coherent memory for unmatched efficiency.
Published 11/13/2023 by Maxx Garrison
GPU benchmarks of Lambda’s NVIDIA H100 SXM5 offering vs the NVIDIA A100 SXM4, using DeepChat’s 3-step training example.
Published 10/12/2023 by Chuan Li