Lorekeeper Config
Configuration for the Lorekeeper mod.
Loaded from config/evermod.json on startup. If the file doesn't exist, a default configuration is created.
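Since the actual JSON key names are not reproduced on this page, the following is only a hypothetical sketch of what a generated config/evermod.json could look like. Every key and value below is an illustrative assumption, not the mod's real schema:

```json
{
  "bluemapMarkerRefreshSeconds": 3600,
  "jeevesChatEnabled": true,
  "jeevesIntentionThreshold": 0.5,
  "notionSyncEnabled": false,
  "aiEnabled": true,
  "aiProvider": "openai"
}
```

The real file will contain one entry per property listed below; consult the generated default file for the authoritative key names.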
See also: the service that uses this configuration.
Properties
How often (in seconds) to re-read BlueMap markers. Default: 3600 (1 hour).
Whether the book market (polish/resale/royalties) is active
Number of recent messages to include in Jeeves chat context
Whether Jeeves chat participation is enabled
Markdown filename/path for the Jeeves intention checker prompt
Probability threshold (0-1) for Jeeves intention checks
Markdown filename/path for the Jeeves response system prompt
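The three Jeeves settings above combine into a simple gate: the intention-checker prompt produces a probability, and Jeeves only responds when it clears the configured threshold. A minimal sketch of that gating, with all names invented for illustration (this is not the mod's API):

```java
// Illustrative sketch: a probability threshold (0-1) gating whether
// Jeeves joins a conversation. Class and method names are hypothetical.
public final class IntentionGate {
    private final double threshold;

    public IntentionGate(double threshold) {
        this.threshold = threshold;
    }

    /** True when the estimated intent probability clears the threshold. */
    public boolean shouldRespond(double intentProbability) {
        return intentProbability >= threshold;
    }
}
```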
Whether the Lorekeeper chat listener (interviews/talks) is active
Whether Lorekeeper encounter event triggers are active
Whether the automatic weekly news scheduler is active
Whether Mario mode features are active on the server
Whether the server-side MCP HTTP endpoint is active
TCP port for the server-side MCP HTTP endpoint
Whether canonical Lorekeeper memory indexing is enabled
Whether memory index snapshots should sync to OpenAI Files storage
Maximum canonical memory records to include per uploaded archive snapshot
OpenAI Files purpose used for uploaded archive snapshots
Number of latest OpenAI archive snapshots to keep before pruning older files
Interval between periodic OpenAI Files archive sync attempts
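The snapshot-retention setting above describes a "keep the N latest, prune the rest" policy. A self-contained sketch of that pruning decision, with hypothetical names (the mod's actual pruning code is not shown in this documentation):

```java
import java.util.List;

// Illustrative "keep the N latest snapshots" retention policy.
// Names are hypothetical, not the mod's real API.
public final class SnapshotRetention {
    /** Given snapshot IDs ordered oldest-to-newest, returns the IDs to delete. */
    public static List<String> toPrune(List<String> snapshotsOldestFirst, int keepLatest) {
        int excess = snapshotsOldestFirst.size() - keepLatest;
        return excess <= 0 ? List.of()
                           : snapshotsOldestFirst.subList(0, excess);
    }
}
```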
Whether talk retrieval may use OpenAI archived memory as a cache fallback when local packets are empty
Whether talk retrieval should prefer memory-index packets
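The two retrieval flags above interact: local memory-index packets are preferred, and the OpenAI archive is consulted only when local packets are empty and the fallback flag is on. A minimal sketch of that preference order, with invented names:

```java
import java.util.List;

// Illustrative retrieval preference: prefer local memory-index packets,
// fall back to the archived-memory cache only when enabled and local
// packets are empty. All names are hypothetical.
public final class TalkRetrieval {
    public static List<String> pickSource(List<String> localPackets,
                                          List<String> archiveCache,
                                          boolean archiveFallbackEnabled) {
        if (!localPackets.isEmpty()) {
            return localPackets;
        }
        return archiveFallbackEnabled ? archiveCache : List.of();
    }
}
```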
Whether the minigame framework (Plasmid game types) is active
Base URL for Notion API requests
Notion integration token
Notion property name for the lore author
Notion database ID to sync with
Whether Notion sync is enabled
Polling interval for Notion sync
Notion property name for the lore timestamp
Notion property name for the lore text (title property)
The API key for AI services
The API base URL (defaults to OpenAI's endpoint)
HTTP connect timeout in seconds
Whether AI features are enabled
Fallback API key
Fallback API base URL
Whether to use a fallback AI provider on failure
Fallback AI model to use
Fallback AI provider identifier
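The fallback settings above suggest a try-primary, retry-with-fallback flow. A sketch of that control flow under stated assumptions (the `Function`-based call shape and all names are illustrative, not the mod's actual client code):

```java
import java.util.function.Function;

// Illustrative "use a fallback AI provider on failure" flow: call the
// primary provider, and if it throws while fallback is enabled, retry
// once with the fallback provider. Names are hypothetical.
public final class FallbackCaller {
    public static String complete(Function<String, String> primary,
                                  Function<String, String> fallback,
                                  boolean fallbackEnabled,
                                  String prompt) {
        try {
            return primary.apply(prompt);
        } catch (RuntimeException e) {
            if (!fallbackEnabled) {
                throw e;
            }
            return fallback.apply(prompt);
        }
    }
}
```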
Per-request timeout for long history generation
High-quality AI model for complex tasks
Low-cost AI model for fast tasks
Per-request timeout for weekly gazette generation
The AI provider identifier (e.g., "openai")
Per-request timeout for reputation profile generation
Whether to run the async startup AI knock-knock test
HTTP request timeout in seconds
Whether WATUT-style presence networking/visuals are active
Minimum character count required for a book submission to be scored
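The startup behavior described at the top of this page (read config/evermod.json, create a default if the file is missing) can be sketched with standard-library file I/O. The default JSON body, class name, and method name here are assumptions for illustration:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative load-or-create-default startup flow for a JSON config file.
// The default body and all names are hypothetical.
public final class ConfigLoader {
    private static final String DEFAULT_JSON = "{\n  \"aiEnabled\": true\n}\n";

    /** Reads the config file, writing the default body first if it is absent. */
    public static String loadOrCreate(Path configFile) {
        try {
            if (Files.notExists(configFile)) {
                Files.createDirectories(configFile.getParent());
                Files.writeString(configFile, DEFAULT_JSON);
            }
            return Files.readString(configFile);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

A real implementation would parse the JSON into the config class (e.g. with a JSON library) rather than return the raw string.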