knitr::opts_chunk$set(
collapse = TRUE, comment = "#>",
eval = identical(tolower(Sys.getenv("LLMR_RUN_VIGNETTES", "false")), "true") )
This vignette demonstrates:

- `build_factorial_experiments()` to cross configurations with prompts into a design grid
- `call_llm_par()` to run all resulting calls in parallel
The workflow is: design → parallel execution → analysis
We will compare three configurations on two prompts, once unstructured and once with structured output. When choosing models, note that at the time of writing, Gemini models do not guarantee schema-conforming output and are more likely to run into trouble.
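The design step builds the experiments grid before any calls are made. A minimal sketch, assuming API keys live in the usual environment variables; the model names are illustrative placeholders, and argument names should be checked against `?llm_config` and `?build_factorial_experiments`:

library(LLMR)
library(dplyr)

# Three configurations to compare (model names are placeholders)
cfg_openai <- llm_config(
  provider = "openai", model = "gpt-4o-mini",
  api_key = Sys.getenv("OPENAI_API_KEY")
)
cfg_anthropic <- llm_config(
  provider = "anthropic", model = "claude-3-5-haiku-latest",
  api_key = Sys.getenv("ANTHROPIC_API_KEY")
)
cfg_gemini <- llm_config(
  provider = "gemini", model = "gemini-2.0-flash",
  api_key = Sys.getenv("GEMINI_API_KEY")
)

# Two prompts; crossing 3 configs x 2 prompts yields 6 experiments
experiments <- build_factorial_experiments(
  configs = list(cfg_openai, cfg_anthropic, cfg_gemini),
  user_prompts = c(
    "Summarize the causes of World War I in two sentences.",
    "List three applications of graph theory."
  ),
  user_prompt_labels = c("history", "math")
)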
setup_llm_parallel(workers = 10)  # configure a parallel backend with 10 workers
res_unstructured <- call_llm_par(experiments, progress = TRUE)
reset_llm_parallel()              # restore the default sequential plan
res_unstructured |>
select(provider, model, user_prompt_label, response_text, finish_reason) |>
head()
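For the analysis step, the label columns let us aggregate over design cells. A rough sketch comparing response lengths by model and prompt, using only the columns selected above (`response_text` can be NA on failed calls, hence `na.rm = TRUE`):

res_unstructured |>
  group_by(model, user_prompt_label) |>
  summarise(
    n_responses = n(),
    mean_chars  = mean(nchar(response_text), na.rm = TRUE),
    .groups = "drop"
  )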
Understanding the results:

The `finish_reason` column shows why each response ended:

- `"stop"`: normal completion
- `"length"`: hit token limit (increase `max_tokens`)
- `"filter"`: content filter triggered

The `user_prompt_label` column helps track which experimental condition produced each response.
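To spot truncation or filtering across the whole grid at a glance, tabulating the column with dplyr is enough:

res_unstructured |>
  count(model, finish_reason)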
# JSON Schema: an object with a string "answer" and an array of string "keywords".
# Note: strict structured-output modes (e.g., OpenAI's) require
# additionalProperties = FALSE and every property listed in "required".
schema <- list(
  type = "object",
  properties = list(
    answer   = list(type = "string"),
    keywords = list(type = "array", items = list(type = "string"))
  ),
  required = list("answer", "keywords"),
  additionalProperties = FALSE
)
# Attach the schema to every config so each provider is asked for conforming JSON
experiments2 <- experiments
experiments2$config <- lapply(experiments2$config, enable_structured_output, schema = schema)
setup_llm_parallel(workers = 10)
# .fields hoists the named parsed fields into their own columns
res_structured <- call_llm_par_structured(experiments2, .fields = c("answer", "keywords"))
reset_llm_parallel()
res_structured |>
select(provider, model, user_prompt_label, structured_ok, answer) |>
head()
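When the schema is honored, `answer` arrives as a plain character column, while `keywords` should come back as a list-column (one character vector per row, given the `.fields` hoisting above). A sketch of flattening it with tidyr, assuming that shape and keeping only rows where parsing succeeded:

res_structured |>
  filter(structured_ok) |>            # keep rows where parsing succeeded
  tidyr::unnest_longer(keywords) |>   # one row per keyword (assumes a list-column)
  count(keywords, sort = TRUE)        # most frequent keywords across all cells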