AI Integration Guide

RAG Scripture Retrieval with /slice (Python)

Use /api/topics to identify relevant verse coordinates, then /api/slice to retrieve a canonical Scripture chunk for LLM context — no reference parsing required.

Pipeline: /api/topics → verse_index → /api/slice → /api/passage → LLM

What You’ll Build

Topic-driven verse discovery via /api/topics
Canonical verse-index windows via /api/slice
Authoritative Scripture text passed verbatim to the LLM
Grounded LLM response anchored to canonical coordinates

Prerequisites: Python 3.9+, requests and groq libraries, a free BibleBridge API key, a free Groq API key.

pip install requests groq

Why /slice for RAG?

/api/slice retrieves canonical verse coordinates for any contiguous verse_index range — spanning chapter and book boundaries without any reference parsing. When another endpoint (such as /topics or /cross-references) returns a verse_index, you can build a precise context window around it and retrieve the full coordinate set before fetching text. This separates chunk discovery from text retrieval, which is useful for caching, deduplication, and building pre-indexed RAG stores.
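Because a /slice call is fully determined by its (start_index, end_index) pair, results can be memoized client-side. A minimal caching sketch, where fetch_slice is a hypothetical stand-in for the real HTTP call:

```python
from typing import Callable, Dict, Tuple

SliceKey = Tuple[int, int]
_cache: Dict[SliceKey, dict] = {}

def cached_slice(start: int, end: int, fetch_slice: Callable[[int, int], dict]) -> dict:
    """Fetch a slice once per canonical range; reuse it on repeat requests."""
    key = (start, end)
    if key not in _cache:
        _cache[key] = fetch_slice(start, end)
    return _cache[key]

# Stubbed fetcher for demonstration: records how many real calls happen.
calls = []
def fake_fetch(start: int, end: int) -> dict:
    calls.append((start, end))
    return {"start_index": start, "end_index": end, "verse_count": end - start + 1}

cached_slice(29853, 29861, fake_fetch)
cached_slice(29853, 29861, fake_fetch)  # served from cache, no second fetch
```

In a real pipeline, fetch_slice would wrap the requests.get call shown later; the cache key works because /slice responses are stable for a given range.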


Step 1: Set environment variables

Store both API keys as environment variables — never hardcode them.

export BIBLEBRIDGE_API_KEY=your_biblebridge_key_here
export GROQ_API_KEY=your_groq_key_here

Step 2: Discover verse coordinates with /topics

Query a topic to get ranked verse coordinates. Each result includes a verse_index — the entry point for building your RAG chunk.

curl --get https://holybible.dev/api/topics \
  -H "Authorization: Bearer YOUR_API_KEY" \
  --data-urlencode topic=grace \
  --data limit=3

Step 3: Build a canonical window with /slice

Take the top result’s verse_index and expand a window around it. /slice returns canonical coordinates for every verse in the range — across any chapter or book boundary.

curl --get https://holybible.dev/api/slice \
  -H "Authorization: Bearer YOUR_API_KEY" \
  --data start_index=29853 \
  --data end_index=29861
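The start/end arithmetic behind those parameters can be wrapped in a small helper that clamps the lower bound, so a window near the start of the canon never produces an index below 1 (assuming verse_index is 1-based, as the examples suggest):

```python
def slice_window(verse_index: int, window: int) -> tuple[int, int]:
    """Return (start_index, end_index) for a symmetric window around a verse,
    clamping the lower bound at 1."""
    start = max(1, verse_index - window)
    end = verse_index + window
    return start, end

print(slice_window(29857, 4))  # matches the curl example above
```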

Step 4: Full pipeline (Python)

The complete script: topic → slice → passage text → grounded LLM response.

import os
import requests
from groq import Groq

BIBLEBRIDGE_API_KEY = os.environ.get("BIBLEBRIDGE_API_KEY")
GROQ_API_KEY = os.environ.get("GROQ_API_KEY")

if not BIBLEBRIDGE_API_KEY:
    raise RuntimeError("Missing BIBLEBRIDGE_API_KEY.")
if not GROQ_API_KEY:
    raise RuntimeError("Missing GROQ_API_KEY.")

BASE = "https://holybible.dev/api"
headers = {"Authorization": f"Bearer {BIBLEBRIDGE_API_KEY}"}

# Step 1: Find relevant verse coordinates for a topic
topics_resp = requests.get(
    f"{BASE}/topics",
    params={"topic": "grace", "limit": 5},
    headers=headers,
    timeout=10
)
topics_resp.raise_for_status()
top_verses = topics_resp.json()["data"]

# Step 2: Build a canonical window around the top result
top_index = top_verses[0]["verse_index"]
window = 4

slice_resp = requests.get(
    f"{BASE}/slice",
    # Clamp the lower bound so a window near the start of the canon
    # never requests an index below 1.
    params={"start_index": max(1, top_index - window), "end_index": top_index + window},
    headers=headers,
    timeout=10
)
slice_resp.raise_for_status()
chunk = slice_resp.json()

start_index = chunk["start_index"]
end_index   = chunk["end_index"]
osis_id     = chunk["osis_id"]

# Step 3: Fetch verse text for the canonical window
passage_resp = requests.get(
    f"{BASE}/passage",
    params={"start_index": start_index, "end_index": end_index, "version": "KJV"},
    headers=headers,
    timeout=10
)
passage_resp.raise_for_status()
verses = passage_resp.json()["data"]

# Step 4: Build RAG context string
context = "\n".join(
    f"{v['book']['name']} {v['chapter']}:{v['verse']} — {v['text']}"
    for v in verses
)

print(f"RAG chunk: {osis_id} ({chunk['verse_count']} verses)")
print()
print(context)

# Step 5: Send to LLM
client = Groq(api_key=GROQ_API_KEY)

response = client.chat.completions.create(
    model="llama-3.3-70b-versatile",
    messages=[
        {
            "role": "system",
            "content": (
                "You answer Bible questions using only the provided Scripture. "
                "Do not cite verses not given to you."
            )
        },
        {
            "role": "user",
            "content": (
                f"Based only on this Scripture:\n\n{context}\n\n"
                "What does the Bible teach about grace?"
            )
        }
    ]
)

print("\nLLM Response:")
print(response.choices[0].message.content)

Example output

Topic: grace — top result: Ephesians 2:8 (verse_index 29857) — window ±4 verses:

RAG chunk — Eph.2.4-Eph.2.12

/slice result → /passage (KJV):

Ephesians 2:4 — But God, who is rich in mercy…
Ephesians 2:5 — Even when we were dead in sins…
Ephesians 2:6 — And hath raised us up together…
Ephesians 2:7 — That in the ages to come he might shew…
Ephesians 2:8 — For by grace are ye saved through faith… ▶ anchor
Ephesians 2:9 — Not of works, lest any man should boast.
Ephesians 2:10 — For we are his workmanship, created…
Ephesians 2:11 — Wherefore remember, that ye being in time past…
Ephesians 2:12 — That at that time ye were without Christ…

LLM response — grounded in the chunk above

llama-3.3-70b — grounded output
Based on this passage, the Bible teaches that grace is God’s unearned favor extended to humanity — not a reward for works but a gift that flows from his mercy and love. Salvation comes through faith in that grace, not through human effort, so that no one can boast. The purpose of this grace is not only rescue but transformation: those saved by grace are created for good works God prepared in advance.

The LLM is constrained to the provided chunk — no hallucinated verses, no unsourced citations.


Why separate /slice from text retrieval?

/slice returns canonical coordinates only — no text. This lets you inspect what is in a range, deduplicate across overlapping topic results, or pre-index chunk metadata before ever making a text call. Once you have the coordinates, /passage fetches text for any translation in one call.

Because /slice operates on the global verse_index, it traverses chapter and book boundaries naturally — a window around verse 29857 spans into adjacent chapters without any client-side boundary logic.
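When several topic results produce overlapping windows, the coordinate ranges can be merged before any text is fetched, which is one concrete payoff of working with coordinates alone. A minimal sketch:

```python
def merge_ranges(ranges: list) -> list:
    """Merge overlapping or adjacent (start_index, end_index) ranges."""
    merged = []
    for start, end in sorted(ranges):
        if merged and start <= merged[-1][1] + 1:  # overlaps or touches previous
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# Two topic hits with overlapping windows collapse to one passage fetch:
print(merge_ranges([(29853, 29861), (29858, 29866)]))  # [(29853, 29866)]
```

Feeding each merged range to /passage avoids fetching and embedding the same verses twice.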


Common use cases

Pre-indexed RAG chunk stores keyed by verse_index range
Grounded LLM responses constrained to canonical Scripture
Semantic Bible search: topic → slice → embed → retrieve
Deduplicating overlapping passages from multiple topic results
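The first use case above, a chunk store keyed by verse_index range, can be sketched as a lookup that answers whether a verse is already covered by a stored chunk. This is a toy in-memory version; a real store would persist chunk metadata (osis_id, embeddings, and so on):

```python
from typing import Optional

class ChunkStore:
    """Toy in-memory chunk store keyed by (start_index, end_index)."""

    def __init__(self) -> None:
        self.chunks: dict = {}

    def add(self, start: int, end: int, meta: dict) -> None:
        self.chunks[(start, end)] = meta

    def covering(self, verse_index: int) -> Optional[dict]:
        """Return metadata for the first stored chunk containing verse_index."""
        for (start, end), meta in self.chunks.items():
            if start <= verse_index <= end:
                return meta
        return None

store = ChunkStore()
store.add(29853, 29861, {"osis_id": "Eph.2.4-Eph.2.12"})
print(store.covering(29857))  # the anchor verse is already covered
```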

Ready to build?

A free API key gives you 500 calls per day — no credit card required.

Get a free API key
View API docs