{"id":53863,"date":"2025-09-15T11:28:16","date_gmt":"2025-09-15T01:28:16","guid":{"rendered":"https:\/\/www.cloudproinc.com.au\/?p=53863"},"modified":"2025-09-15T11:28:19","modified_gmt":"2025-09-15T01:28:19","slug":"practical-ways-to-fine-tune-llms","status":"publish","type":"post","link":"https:\/\/cloudproinc.com.au\/index.php\/2025\/09\/15\/practical-ways-to-fine-tune-llms\/","title":{"rendered":"Practical ways to fine-tune LLMs"},"content":{"rendered":"\n<p>In this post on practical ways to fine-tune LLMs and choosing the right method, we walk through what fine-tuning is, when you should do it, the most useful types of fine-tuning, and a practical path to shipping results.<\/p>\n\n\n\n<!--more-->\n\n\n\n<p>Large language models are astonishingly capable, but out of the box they still reflect general internet behavior. The fastest way to get them working for your domain\u2014your tone, your policies, your data\u2014is fine-tuning. We start with the high-level choices, then dive into the technology behind them and the steps to implement each method.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-high-level-view-first\">High-level view first<\/h2>\n\n\n\n<p>Fine-tuning adjusts a pre-trained model so it performs better on your tasks. You can think of it as teaching an already fluent writer your company style guide and procedures. 
Sometimes you need to retrain all parameters; more often, you only train a small set of additional parameters so training is faster, cheaper, and safer.<\/p>\n\n\n\n<p>If prompts and retrieval (<a href=\"https:\/\/www.cloudproinc.com.au\/index.php\/category\/rag\/\">RAG<\/a>) get you 80% of the way, fine-tuning is typically how you close the last gap: consistent formatting, policy adherence, and reduced hallucinations on familiar tasks.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-the-technology-in-brief\">The technology in brief<\/h2>\n\n\n\n<p>Modern <a href=\"https:\/\/www.cloudproinc.com.au\/index.php\/category\/llm\/\">LLMs<\/a> are transformer networks. They learn by minimizing a loss function via gradient descent across billions of parameters. Fine-tuning re-runs this process, but instead of learning language from scratch, it nudges the model toward your objectives.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Full fine-tuning<\/strong> updates all weights\u2014powerful but costly and prone to catastrophic forgetting.<\/li>\n\n\n\n<li><strong>PEFT (Parameter-Efficient Fine-Tuning)<\/strong> adds a small number of trainable parameters on top of frozen base weights. Methods like LoRA and QLoRA are the industry default because they keep memory and cost manageable.<\/li>\n\n\n\n<li><strong>Preference optimization<\/strong> (e.g., RLHF, DPO) aligns outputs with human preferences, improving helpfulness, tone, and safety.<\/li>\n\n\n\n<li><strong>Prompt\/prefix tuning<\/strong> learns tiny \u201csoft prompts\u201d that steer behavior without modifying the base model.<\/li>\n<\/ul>\n\n\n\n<p>Quantization (e.g., 4-bit) shrinks memory use during training and inference. 
QLoRA combines 4-bit quantization with LoRA adapters so you can fine-tune multi-billion-parameter models on a single modern GPU.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-when-to-fine-tune-vs-alternatives\">When to fine-tune vs alternatives<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Try prompting first<\/strong> if your task is straightforward and data is scarce.<\/li>\n\n\n\n<li><strong>Use RAG<\/strong> when answers depend on frequently changing proprietary knowledge. It\u2019s simpler to update a document index than to re-train a model.<\/li>\n\n\n\n<li><strong>Fine-tune<\/strong> when you need consistent format, policy compliance, task-specific reasoning, or your prompts are getting too long and fragile.<\/li>\n\n\n\n<li><strong>Combine RAG + fine-tuning<\/strong> for the best of both worlds: retrieval for facts, fine-tuning for behavior and formatting.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-types-of-fine-tuning-you-should-know\">Types of fine-tuning you should know<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-1-supervised-fine-tuning-sft\">1) Supervised Fine-Tuning (SFT)<\/h3>\n\n\n\n<p>Train on input-output pairs to imitate desired responses. Great for instruction-following, style, and deterministic workflows (e.g., support macros, form filling).<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-2-parameter-efficient-fine-tuning-peft\">2) Parameter-Efficient Fine-Tuning (PEFT)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>LoRA<\/strong>: Injects low-rank matrices into attention\/projection layers. Trains a small set of parameters while keeping the base model frozen.<\/li>\n\n\n\n<li><strong>QLoRA<\/strong>: Same idea, but with 4-bit quantization for memory efficiency. 
Enables 7B\u201370B models on a single high-memory GPU or a few smaller ones.<\/li>\n\n\n\n<li><strong>Adapters<\/strong>: Adds small modules between layers; flexible, composable.<\/li>\n\n\n\n<li><strong>Prefix\/Prompt Tuning<\/strong>: Learns trainable soft prompts. Ultra-lightweight, best when tasks are closely related.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-3-full-fine-tuning\">3) Full fine-tuning<\/h3>\n\n\n\n<p>Updates all parameters. Use when the model must deeply internalize a new domain or architecture-specific behavior and you have substantial high-quality data and budget.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-4-preference-optimization\">4) Preference optimization<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>RLHF (PPO)<\/strong>: Trains a reward model from human rankings, then optimizes the policy model. Powerful but complex to run.<\/li>\n\n\n\n<li><strong>DPO\/IPO\/ORPO\/KTO<\/strong>: Newer, simpler methods that directly learn from preference pairs without a separate reward model. Often easier for teams to adopt.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-5-specialized-fine-tunes\">5) Specialized fine-tunes<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Domain adaptation<\/strong>: Legal, medical, finance corpora.<\/li>\n\n\n\n<li><strong>Task specialization<\/strong>: SQL generation, code review, redaction.<\/li>\n\n\n\n<li><strong>Safety\/guardrails<\/strong>: Reinforce refusal style and policy adherence.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-how-to-choose-the-right-approach\">How to choose the right approach<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Small dataset (1k\u201320k examples)<\/strong>: SFT with LoRA\/QLoRA. Add preference tuning if tone\/politeness or safety matters.<\/li>\n\n\n\n<li><strong>Medium dataset (20k\u2013200k)<\/strong>: QLoRA or adapters. 
Consider DPO to enforce preferences, and RAG if knowledge changes often.<\/li>\n\n\n\n<li><strong>Large dataset (>200k)<\/strong>: Evaluate if full fine-tuning is worth the cost; strong evals and overfitting safeguards required.<\/li>\n\n\n\n<li><strong>Strict latency\/cost<\/strong>: Prefer smaller base models with LoRA, distillation, or quantization at inference.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-a-practical-workflow\">A practical workflow<\/h2>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Define success<\/strong>: What metric moves the business? Exact match, BLEU\/ROUGE for structure, win-rate vs baseline, task completion time, hallucination rate, latency, cost.<\/li>\n\n\n\n<li><strong>Data strategy<\/strong>: Collect high-signal examples. Deduplicate, redact sensitive data, normalize formats. For chat tasks, use a consistent schema (system, user, assistant turns).<\/li>\n\n\n\n<li><strong>Model selection<\/strong>: Choose a base model size that fits your latency and budget. Prefer models with permissive licenses if you deploy on-prem.<\/li>\n\n\n\n<li><strong>Pick a method<\/strong>: Start with LoRA\/QLoRA for most cases. Use DPO when human preference matters.<\/li>\n\n\n\n<li><strong>Train<\/strong>: Start with conservative learning rates, small ranks (e.g., r=8\u201316 for LoRA), short epochs. Watch validation loss and sample quality.<\/li>\n\n\n\n<li><strong>Evaluate<\/strong>: Use automatic metrics plus human review. Compare against prompt-only and RAG baselines.<\/li>\n\n\n\n<li><strong>Safety checks<\/strong>: Test jailbreaks, PII leakage, policy adherence, and unintended bias. Add refusal patterns to training if needed.<\/li>\n\n\n\n<li><strong>Deploy<\/strong>: Merge adapters if needed, quantize for inference, and monitor drift. 
Set up canarying and rollback.<\/li>\n<\/ol>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-minimal-code-example-with-lora-and-qlora\">Minimal code example with LoRA and QLoRA<\/h2>\n\n\n\n<p>The snippet below shows supervised fine-tuning with LoRA using the Transformers and PEFT libraries. Adapt paths and hyperparameters to your setup and ensure you have the appropriate model license.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>pip install -U transformers datasets peft accelerate bitsandbytes<\/code><\/pre>\n\n\n\n<pre class=\"wp-block-code has-white-color has-black-background-color has-text-color has-background has-link-color wp-elements-0a4ff8b25cc09ff608008c307e256f96\"><code>from datasets import load_dataset\nfrom transformers import (AutoTokenizer, AutoModelForCausalLM, Trainer,\n                          TrainingArguments, DataCollatorForLanguageModeling,\n                          BitsAndBytesConfig)\nfrom peft import (LoraConfig, get_peft_model, prepare_model_for_kbit_training,\n                  TaskType)\n\n# Choose a base model that fits your hardware and license\nmodel_name = \"your-org\/your-base-model\"  # e.g., a 7B\u20138B instruction model\n\ntokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)\nif tokenizer.pad_token is None:\n    tokenizer.pad_token = tokenizer.eos_token\n\n# QLoRA 4-bit config (comment out to use standard LoRA without quantization)\nbnb_config = BitsAndBytesConfig(\n    load_in_4bit=True,\n    bnb_4bit_quant_type=\"nf4\",\n    bnb_4bit_compute_dtype=\"bfloat16\",\n    bnb_4bit_use_double_quant=True,\n)\n\nmodel = AutoModelForCausalLM.from_pretrained(\n    model_name,\n    device_map=\"auto\",\n    quantization_config=bnb_config,  # remove for standard LoRA\n)\n\n# Prepare the quantized model for gradient training (skip for standard LoRA)\nmodel = prepare_model_for_kbit_training(model)\n\n# LoRA configuration\nlora = LoraConfig(\n    task_type=TaskType.CAUSAL_LM,\n    r=16,\n    lora_alpha=32,\n    lora_dropout=0.05,\n    target_modules=&#91;\"q_proj\", \"v_proj\"],  # adjust by architecture\n)\nmodel = get_peft_model(model, lora)\n\n# Your dataset should contain `prompt` and `response` 
fields\n# Example: a JSONL file with instruction\/response pairs\n# {\"prompt\": \"Summarize...\", \"response\": \"...\"}\n\nds = load_dataset(\"json\", data_files={\"train\": \"train.jsonl\", \"val\": \"val.jsonl\"})\n\nEOS = tokenizer.eos_token\n\ndef format_example(ex):\n    text = f\"&lt;|user|&gt;\\n{ex&#91;'prompt']}\\n&lt;|assistant|&gt;\\n{ex&#91;'response']}{EOS}\"\n    return {\"text\": text}\n\n\nds = ds.map(format_example)\n\ndef tokenize(batch):\n    return tokenizer(batch&#91;\"text\"], truncation=True, max_length=2048)\n\nds = ds.map(tokenize, batched=True, remove_columns=ds&#91;\"train\"].column_names)\n\ncollator = DataCollatorForLanguageModeling(tokenizer, mlm=False)\n\nargs = TrainingArguments(\n    output_dir=\".\/ft-lora\",\n    per_device_train_batch_size=2,\n    per_device_eval_batch_size=2,\n    gradient_accumulation_steps=8,\n    num_train_epochs=2,\n    learning_rate=2e-4,\n    logging_steps=20,\n    evaluation_strategy=\"steps\",\n    eval_steps=200,\n    save_steps=200,\n    save_total_limit=3,\n    bf16=True,\n    report_to=\"none\"\n)\n\ntrainer = Trainer(\n    model=model,\n    args=args,\n    train_dataset=ds&#91;\"train\"],\n    eval_dataset=ds&#91;\"val\"],\n    data_collator=collator\n)\n\ntrainer.train()\n\n# Optionally merge LoRA weights for standalone deployment\n# model = model.merge_and_unload()\n# model.save_pretrained(\".\/ft-merged\")\n# tokenizer.save_pretrained(\".\/ft-merged\")\n<\/code><\/pre>\n\n\n\n<p>Tips: keep sequence lengths as short as your task allows, monitor overfitting with early stopping, and always compare with a prompt-only baseline.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-evaluation-that-actually-drives-decisions\">Evaluation that actually drives decisions<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Task metrics<\/strong>: Exact match, F1, ROUGE\/BLEU, JSON schema validity, SQL executability, toxicity\/off-policy rate.<\/li>\n\n\n\n<li><strong>Human win-rate<\/strong>: Sample pairs 
(baseline vs fine-tuned) and ask raters to vote blindly.<\/li>\n\n\n\n<li><strong>Robustness<\/strong>: Paraphrase tests, out-of-domain prompts, adversarial cases.<\/li>\n\n\n\n<li><strong>Latency and cost<\/strong>: Median and P95 latency, tokens per second, memory footprint.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-safety-and-compliance\">Safety and compliance<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Redact PII in training data; add compliance examples to SFT where needed.<\/li>\n\n\n\n<li>Include refusal exemplars for disallowed topics and verify with red-teaming.<\/li>\n\n\n\n<li>Run automated audits for toxicity, bias, and data leakage.<\/li>\n\n\n\n<li>Document data sources, licenses, and model changes for auditability.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-production-tips\">Production tips<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Keep adapters separate<\/strong> so you can hot-swap versions per use case.<\/li>\n\n\n\n<li><strong>Quantize for inference<\/strong> (8-bit\/4-bit) when latency\/cost matter; measure quality impact.<\/li>\n\n\n\n<li><strong>RAG for freshness<\/strong>: Update your index daily; fine-tune behavior, not facts that change weekly.<\/li>\n\n\n\n<li><strong>Guardrails<\/strong> at runtime: schema-constrained decoding, content filters, and timeouts.<\/li>\n\n\n\n<li><strong>Monitor drift<\/strong>: Log prompts, outputs, and feedback. 
Retrain on new edge cases monthly or quarterly.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-common-pitfalls-to-avoid\">Common pitfalls to avoid<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Overfitting on small, noisy datasets\u2014use validation holdouts and early stopping.<\/li>\n\n\n\n<li>Training on your test set\u2014create a clean, untouched evaluation split.<\/li>\n\n\n\n<li>Ignoring baselines\u2014prove that fine-tuning beats prompt engineering and RAG-only.<\/li>\n\n\n\n<li>Too-long contexts\u2014prune templates; long contexts cost more and may not help.<\/li>\n\n\n\n<li>Mismatched objectives\u2014if you need preference alignment, SFT alone may disappoint; add DPO or RLHF.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-a-quick-chooser-guide\">A quick chooser guide<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Formatting and tone issues<\/strong>: Start with SFT + LoRA.<\/li>\n\n\n\n<li><strong>Safety and style alignment<\/strong>: Add preference tuning (DPO\/IPO).<\/li>\n\n\n\n<li><strong>Rapidly changing facts<\/strong>: RAG plus a small SFT for behavior.<\/li>\n\n\n\n<li><strong>Strict resource limits<\/strong>: Prompt\/prefix tuning or tiny adapters.<\/li>\n\n\n\n<li><strong>Deep domain shift with lots of data<\/strong>: Consider full fine-tuning\u2014plan for cost and careful evals.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-wrapping-up\">Wrapping up<\/h2>\n\n\n\n<p>Fine-tuning lets you turn a general LLM into your company\u2019s specialist. Start small with LoRA or QLoRA on a focused dataset, measure rigorously, and iterate. For many teams, this blend of parameter-efficient training, strong evaluation, and runtime guardrails delivers the best quality-to-cost ratio.<\/p>\n\n\n\n<p>If you want a pragmatic path: define success, build a clean 5\u201320k example dataset, run SFT with LoRA, compare against prompt\/RAG baselines, and only then consider preference tuning or larger models. 
That\u2019s how you move from demos to dependable production systems.<\/p>\n\n\n\n<ul class=\"wp-block-yoast-seo-related-links yoast-seo-related-links\">\n<li><a href=\"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/08\/22\/what-is-supervised-fine-tuning-sft\/\">What is Supervised Fine-Tuning (SFT)<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/08\/06\/how-to-code-and-build-a-gpt-large-language-model\/\">How to Code and Build a GPT Large Language Model<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/04\/29\/how-to-protect-your-openai-net-apps-from-prompt-injection-attacks-with-azure-ai-foundry\/\">Protect Your OpenAI .NET Apps from Prompt Injection Attacks<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/04\/25\/openai-gpt-image-1-blazor-net-image-generator-web-app\/\">OpenAI GPT-Image-1 Blazor .NET Image Generator Web App<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/www.cloudproinc.com.au\/index.php\/unleash-the-full-potential-of-microsoft-intune-with-cpi-consulting\/\">Unleash the Full Potential of Microsoft Intune with CPI Consulting<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>A practical guide to LLM fine-tuning methods, when to use them, and how to implement LoRA and QLoRA with solid evaluation and safety steps.<\/p>\n","protected":false},"author":1,"featured_media":53868,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_yoast_wpseo_focuskw":"Practical ways to fine-tune LLMs","_yoast_wpseo_title":"","_yoast_wpseo_metadesc":"Explore practical ways to fine-tune LLMs to enhance performance tailored to your domain and specific 
needs.","_yoast_wpseo_opengraph-title":"","_yoast_wpseo_opengraph-description":"","_yoast_wpseo_twitter-title":"","_yoast_wpseo_twitter-description":"","_et_pb_use_builder":"","_et_pb_old_content":"","_et_gb_content_width":"","_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[24,13,77],"tags":[],"class_list":["post-53863","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai","category-blog","category-llm"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.3 (Yoast SEO v27.4) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Practical ways to fine-tune LLMs - CPI Consulting<\/title>\n<meta name=\"description\" content=\"Explore practical ways to fine-tune LLMs to enhance performance tailored to your domain and specific needs.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/09\/15\/practical-ways-to-fine-tune-llms\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Practical ways to fine-tune LLMs\" \/>\n<meta property=\"og:description\" content=\"Explore practical ways to fine-tune LLMs to enhance performance tailored to your domain and specific needs.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/09\/15\/practical-ways-to-fine-tune-llms\/\" \/>\n<meta property=\"og:site_name\" content=\"CPI Consulting\" \/>\n<meta property=\"article:published_time\" content=\"2025-09-15T01:28:16+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-09-15T01:28:19+00:00\" \/>\n<meta property=\"og:image\" 
content=\"https:\/\/cloudproinc.com.au\/wp-content\/uploads\/2025\/09\/practical-ways-to-fine-tune-llms-and-choosing-the-right-method-1024x683.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1024\" \/>\n\t<meta property=\"og:image:height\" content=\"683\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"CPI Staff\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"CPI Staff\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/09\\\/15\\\/practical-ways-to-fine-tune-llms\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/09\\\/15\\\/practical-ways-to-fine-tune-llms\\\/\"},\"author\":{\"name\":\"CPI Staff\",\"@id\":\"https:\\\/\\\/cloudproinc.com.au\\\/#\\\/schema\\\/person\\\/192eeeb0ce91062126ce3822ae88fe6e\"},\"headline\":\"Practical ways to fine-tune 
LLMs\",\"datePublished\":\"2025-09-15T01:28:16+00:00\",\"dateModified\":\"2025-09-15T01:28:19+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/09\\\/15\\\/practical-ways-to-fine-tune-llms\\\/\"},\"wordCount\":1298,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/cloudproinc.com.au\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/09\\\/15\\\/practical-ways-to-fine-tune-llms\\\/#primaryimage\"},\"thumbnailUrl\":\"\\\/wp-content\\\/uploads\\\/2025\\\/09\\\/practical-ways-to-fine-tune-llms-and-choosing-the-right-method.png\",\"articleSection\":[\"AI\",\"Blog\",\"LLM\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/09\\\/15\\\/practical-ways-to-fine-tune-llms\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/09\\\/15\\\/practical-ways-to-fine-tune-llms\\\/\",\"url\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/09\\\/15\\\/practical-ways-to-fine-tune-llms\\\/\",\"name\":\"Practical ways to fine-tune LLMs - CPI Consulting\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/cloudproinc.com.au\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/09\\\/15\\\/practical-ways-to-fine-tune-llms\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/09\\\/15\\\/practical-ways-to-fine-tune-llms\\\/#primaryimage\"},\"thumbnailUrl\":\"\\\/wp-content\\\/uploads\\\/2025\\\/09\\\/practical-ways-to-fine-tune-llms-and-choosing-the-right-method.png\",\"datePublished\":\"2025-09-15T01:28:16+00:00\",\"dateModified\":\"2025-09-15T01:28:19+00:00\",\"description\":\"Explore practical ways to fine-tune LLMs to enhance performance tailored to your domain and specific 
needs.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/09\\\/15\\\/practical-ways-to-fine-tune-llms\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/09\\\/15\\\/practical-ways-to-fine-tune-llms\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/09\\\/15\\\/practical-ways-to-fine-tune-llms\\\/#primaryimage\",\"url\":\"\\\/wp-content\\\/uploads\\\/2025\\\/09\\\/practical-ways-to-fine-tune-llms-and-choosing-the-right-method.png\",\"contentUrl\":\"\\\/wp-content\\\/uploads\\\/2025\\\/09\\\/practical-ways-to-fine-tune-llms-and-choosing-the-right-method.png\",\"width\":1536,\"height\":1024},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/09\\\/15\\\/practical-ways-to-fine-tune-llms\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Practical ways to fine-tune LLMs\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/cloudproinc.com.au\\\/#website\",\"url\":\"https:\\\/\\\/cloudproinc.com.au\\\/\",\"name\":\"Cloud Pro Inc - CPI Consulting Pty Ltd\",\"description\":\"Cloud, AI &amp; Cybersecurity Consulting | Melbourne\",\"publisher\":{\"@id\":\"https:\\\/\\\/cloudproinc.com.au\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/cloudproinc.com.au\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/cloudproinc.com.au\\\/#organization\",\"name\":\"Cloud Pro Inc - Cloud Pro Inc - CPI 
Consulting Pty Ltd\",\"url\":\"https:\\\/\\\/cloudproinc.com.au\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/cloudproinc.com.au\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"\\\/wp-content\\\/uploads\\\/2022\\\/01\\\/favfinalfile.png\",\"contentUrl\":\"\\\/wp-content\\\/uploads\\\/2022\\\/01\\\/favfinalfile.png\",\"width\":500,\"height\":500,\"caption\":\"Cloud Pro Inc - Cloud Pro Inc - CPI Consulting Pty Ltd\"},\"image\":{\"@id\":\"https:\\\/\\\/cloudproinc.com.au\\\/#\\\/schema\\\/logo\\\/image\\\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/cloudproinc.com.au\\\/#\\\/schema\\\/person\\\/192eeeb0ce91062126ce3822ae88fe6e\",\"name\":\"CPI Staff\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/2d96eeb53b791d92c8c50dd667e3beec92c93253bb6ff21c02cfa8ca73665c70?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/2d96eeb53b791d92c8c50dd667e3beec92c93253bb6ff21c02cfa8ca73665c70?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/2d96eeb53b791d92c8c50dd667e3beec92c93253bb6ff21c02cfa8ca73665c70?s=96&d=mm&r=g\",\"caption\":\"CPI Staff\"},\"sameAs\":[\"http:\\\/\\\/www.cloudproinc.com.au\"],\"url\":\"https:\\\/\\\/cloudproinc.com.au\\\/index.php\\\/author\\\/cpiadmin\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"Practical ways to fine-tune LLMs - CPI Consulting","description":"Explore practical ways to fine-tune LLMs to enhance performance tailored to your domain and specific needs.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/09\/15\/practical-ways-to-fine-tune-llms\/","og_locale":"en_US","og_type":"article","og_title":"Practical ways to fine-tune LLMs","og_description":"Explore practical ways to fine-tune LLMs to enhance performance tailored to your domain and specific needs.","og_url":"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/09\/15\/practical-ways-to-fine-tune-llms\/","og_site_name":"CPI Consulting","article_published_time":"2025-09-15T01:28:16+00:00","article_modified_time":"2025-09-15T01:28:19+00:00","og_image":[{"width":1024,"height":683,"url":"https:\/\/cloudproinc.com.au\/wp-content\/uploads\/2025\/09\/practical-ways-to-fine-tune-llms-and-choosing-the-right-method-1024x683.png","type":"image\/png"}],"author":"CPI Staff","twitter_card":"summary_large_image","twitter_misc":{"Written by":"CPI Staff","Est. 
reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/09\/15\/practical-ways-to-fine-tune-llms\/#article","isPartOf":{"@id":"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/09\/15\/practical-ways-to-fine-tune-llms\/"},"author":{"name":"CPI Staff","@id":"https:\/\/cloudproinc.com.au\/#\/schema\/person\/192eeeb0ce91062126ce3822ae88fe6e"},"headline":"Practical ways to fine-tune LLMs","datePublished":"2025-09-15T01:28:16+00:00","dateModified":"2025-09-15T01:28:19+00:00","mainEntityOfPage":{"@id":"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/09\/15\/practical-ways-to-fine-tune-llms\/"},"wordCount":1298,"commentCount":0,"publisher":{"@id":"https:\/\/cloudproinc.com.au\/#organization"},"image":{"@id":"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/09\/15\/practical-ways-to-fine-tune-llms\/#primaryimage"},"thumbnailUrl":"\/wp-content\/uploads\/2025\/09\/practical-ways-to-fine-tune-llms-and-choosing-the-right-method.png","articleSection":["AI","Blog","LLM"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.cloudproinc.com.au\/index.php\/2025\/09\/15\/practical-ways-to-fine-tune-llms\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/09\/15\/practical-ways-to-fine-tune-llms\/","url":"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/09\/15\/practical-ways-to-fine-tune-llms\/","name":"Practical ways to fine-tune LLMs - CPI 