library(rlmstudio)
lms_installed <- has_lms()
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)

The rlmstudio package bridges the gap between R and
local Large Language Models by wrapping the LM Studio CLI and its REST
API. This vignette covers the GUI Workflow, which is
best for visual users on desktop platforms such as macOS, Windows, and
Linux.
While the R package provides functions to manage the entire lifecycle of a local LLM, the LM Studio desktop application provides an excellent visual search function for finding new models and exploring advanced configurations beyond what the API can currently do. You can seamlessly mix and match: use the GUI to discover and tweak models, and use R to automate your chatting and data processing.
This package relies on the LM Studio CLI. If you do not have LM Studio installed or need to update your version, the package provides a convenient setup function.
For desktop users, you can run
install_lmstudio(method = "browser") in your console to
open the official download page.
You have two options for starting the local server. You can open the LM Studio desktop application, navigate to the Developer or Local Server tab, and click “Start”. Alternatively, you can start it directly from R.
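In code, the second option might look like the sketch below. Note that lms_server_start() is an assumed name, inferred as the counterpart of lms_server_stop() shown later in this vignette; check the package documentation for the exact function.

```r
# Start the LM Studio local server from R, guarding on the CLI being present.
# NOTE: lms_server_start() is an assumed function name, not confirmed API.
if (has_lms()) {
  lms_server_start()
}
```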
The LM Studio GUI shines when it comes to discovering models. You can use its built-in search bar to browse Hugging Face, filter by compatibility, and select specific quantizations.
However, if you already know the exact identifier of the model you want, you can download it and manage your inventory directly from R.
# Download a model using its identifier
model <- "google/gemma-3-1b"
job_id <- lms_download(model)
#> ℹ Initiating download for model: "google/gemma-3-1b"...
#> ✔ Initiating download for model: "google/gemma-3-1b"... [724ms]
#> ✔ Download job started successfully. Job ID: "job_02c8a1f86e"
lms_download_status(job_id)
#>
#> ── Download Job: "job_02c8a1f86e"
#> Status: completed
#> Progress: 100% (0.72 GB / 0.72 GB)

Before you can chat with a model, you must load it into system
memory. Loading accepts an optional flash_attention = TRUE
argument, which speeds up processing and reduces memory usage on
supported hardware.
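The loading step might be sketched as follows. The function name lms_load() and its flash_attention argument are assumptions based on the description above, not confirmed rlmstudio API; consult the package documentation for the actual loading function.

```r
# Load the model into memory before chatting.
# NOTE: lms_load() and flash_attention are assumed names; verify against
# your installed version of rlmstudio.
model <- "google/gemma-3-1b"
lms_load(model, flash_attention = TRUE)
```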
Interact with the model by sending it text prompts. The
lms_chat() function takes a few key arguments to guide the
AI’s response:
- input: your main message or question for the model.
- system_prompt: an optional set of background instructions. Use it to tell the AI how to behave, what role to play, or how to format its answers (for example, asking it to act as an expert R programmer).
Note: The rlmstudio package currently processes each chat request independently. Every time you run the lms_chat() function, you are starting a brand new conversation. The model will not remember previous messages or context from earlier in your R script.
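If you need the model to see earlier turns, one workaround is to fold the prior exchange into the prompt yourself. Below is a minimal sketch using only lms_chat() as shown in this vignette; the "User:/Assistant:" transcript format is an ad-hoc convention for illustration, not an rlmstudio feature.

```r
# Carry context manually by prepending earlier turns to the new input.
history <- paste(
  "User: What is the capital of France?",
  "Assistant: Paris.",
  sep = "\n"
)
response <- lms_chat(
  model = model,
  input = paste(history, "User: And of Italy?", sep = "\n"),
  system_prompt = "Continue the conversation transcript."
)
```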
response <- lms_chat(
model = model,
input = "Say hello!",
system_prompt = "Answer in rhymes."
)
cat(response)
#> Hello there, friend so true,
#> A simple greeting for you!
#>
#> Hello there, let's have some fun,
#> Let’s say hello, the victory won!
#>
#> Hello there, with a joyful plea,
#> A friendly greeting, you’ll agree!
#> A friendly face, so bright and new,
#> Hello there, it’s waiting for you!
#>
#> Let's chat and have a joyful spree,
#> Hello there, happy as can be!

To free up memory and system resources when you are finished, it is best practice to unload your models and stop the local server. Closing the LM Studio GUI will also perform this cleanup if you forget.
# Unload the model
lms_unload(model)
#> ℹ Unloading model: "google/gemma-3-1b"...
#> ✔ Model "google/gemma-3-1b" unloaded successfully. [427ms]
# Stop the server
lms_server_stop()
#> ✔ LM Studio server stopped successfully.