Preparing Input Text for Training LLMs

In this blog post, "Preparing Input Text for Training LLMs that Perform in Production," we will walk through the decisions and steps that make training data truly useful. Whether you're pretraining from scratch or fine-tuning an existing model, disciplined data prep is...
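Two of the most common prep steps the post alludes to, normalizing raw text and removing exact duplicates, can be sketched with the standard library alone. This is a minimal illustration, not the post's actual pipeline; the function names and the hash-based dedup strategy are assumptions.

```python
import re
import hashlib

def clean_document(text: str) -> str:
    """Strip control characters and collapse runs of horizontal whitespace."""
    text = re.sub(r"[\x00-\x08\x0b-\x1f\x7f]", "", text)  # drop control chars, keep \t and \n
    text = re.sub(r"[ \t]+", " ", text)                   # collapse spaces and tabs
    return text.strip()

def dedupe(docs):
    """Drop exact-duplicate documents by hashing their cleaned content."""
    seen, unique = set(), []
    for doc in docs:
        digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

corpus = ["Hello   world.", "Hello world.", "Second doc.\x00"]
prepared = dedupe([clean_document(d) for d in corpus])
```

Real corpora usually also need near-duplicate detection (e.g. MinHash) and quality filtering, but exact-match dedup like this already removes a surprising amount of web-scraped redundancy.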
Loading and Saving PyTorch Weights

In this blog post, "Best Practices for Loading and Saving PyTorch Weights in Production," we will map out the practical ways to persist and restore your models without surprises. Whether you build models or manage teams shipping them, understanding how PyTorch saves...
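The core of the workflow the post previews is PyTorch's `state_dict` pattern: save only the weights, then rebuild the architecture and load them back in. A minimal sketch, using a stand-in `nn.Linear` model (any `nn.Module` works the same way):

```python
import torch
import torch.nn as nn

# Stand-in model; in practice this is your trained network.
model = nn.Linear(4, 2)

# Recommended practice: persist the state_dict (just the weights),
# not the whole pickled module object.
torch.save(model.state_dict(), "weights.pt")

# Restoring: instantiate the same architecture, then load weights into it.
restored = nn.Linear(4, 2)
state = torch.load("weights.pt", map_location="cpu")
restored.load_state_dict(state)
restored.eval()  # switch to inference mode before serving
```

Saving the `state_dict` rather than the full object keeps checkpoints portable across code refactors, and `map_location="cpu"` lets GPU-trained weights load on machines without a GPU.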
Practical ways to fine-tune LLMs

In this blog post, "Practical ways to fine-tune LLMs and choosing the right method," we will walk through what fine-tuning is, when you should do it, the most useful types of fine-tuning, and a practical path to ship results. Large language models are astonishingly...
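One of the methods such a guide typically covers is parameter-efficient fine-tuning in the LoRA style: the pretrained weight matrix W stays frozen, and only a low-rank correction B·A is trained, so the adapted weight is W + (alpha/r)·B·A. The idea can be sketched in plain Python (illustrative arithmetic only; the matrix values and `alpha` are made up for the example):

```python
# LoRA-style low-rank update: instead of training all d_out*d_in entries of W,
# train only B (d_out x r) and A (r x d_in), with r much smaller than d_in/d_out.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

d_out, d_in, r = 2, 3, 1
W = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0]]          # frozen pretrained weight (d_out x d_in)
B = [[0.5], [0.0]]             # trained low-rank factor (d_out x r)
A = [[1.0, 1.0, 1.0]]          # trained low-rank factor (r x d_in)
alpha = 2.0                    # scaling hyperparameter

delta = matmul(B, A)           # rank-r correction
scale = alpha / r
W_adapted = [[W[i][j] + scale * delta[i][j] for j in range(d_in)]
             for i in range(d_out)]
```

Here only r·(d_in + d_out) = 5 numbers are trainable instead of 6; at realistic model sizes that ratio is what makes fine-tuning feasible on modest hardware.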
Understanding Azure Phi-3

In this blog post, "Understanding Azure Phi-3 and how to use it across cloud and edge," we will unpack what Azure Phi-3 is, why it matters, and how you can put it to work quickly and safely. Think of Azure Phi-3 as a family of small, efficient language models designed to...
Create a Blank Neo4j Instance

In this blog post, "Create a Blank Neo4j Instance Safely on Docker, VM, or Kubernetes," we will walk through how to spin up a clean, secure Neo4j deployment on Docker, a Linux VM, or Kubernetes, and why each step matters.
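For the Docker path the post previews, a blank instance comes down to one `docker run` with the official image, an explicit initial password, and a data volume so the graph survives container restarts. A minimal sketch (the container name, password, and host path are placeholders you should change):

```shell
# Start a blank Neo4j instance with a non-default password and persistent data.
docker run -d \
  --name my-neo4j \
  -p 7474:7474 -p 7687:7687 \
  -e NEO4J_AUTH=neo4j/change-this-password \
  -v "$HOME/neo4j/data":/data \
  neo4j:5
```

Port 7474 serves the Neo4j Browser UI and 7687 the Bolt protocol; setting `NEO4J_AUTH` up front avoids the insecure default credentials the post warns about.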