Integrating KARL into Your Application
This section provides detailed guidance on effectively weaving Project KARL's capabilities into your application. It covers data design, container management, interaction flow, UI integration, and potential model customization.
Designing Your Application's Interaction Data
The quality and relevance of the data you feed into KARL are paramount to its learning effectiveness. KARL learns from `InteractionData` objects, which represent metadata about user actions.
Identifying Relevant User Actions/Metadata
Carefully consider which user interactions provide meaningful signals for the type of personalization you want to achieve. Focus on metadata, not sensitive content.
- Examples for a To-Do App: Task creation, completion, due date setting, priority changes, project assignment, filter usage.
- Examples for a Code Editor: Commands executed, files opened/saved, frequently used snippets, refactoring actions (type, not content).
- Key Principle: Select data points that, if learned, would allow KARL to make useful predictions or adaptations. Always prioritize user privacy; avoid logging detailed text input or file contents unless absolutely essential and with explicit user consent for a specific feature.
Encoding Data for KARL (Mapping events to `InteractionData`)
Once relevant actions are identified, you need to map them to the `InteractionData` structure within your `DataSource` implementation. This involves defining a `type` string and populating the `details` map.
Conceptual Snippet for `DataSource`:
```kotlin
// Inside your DataSource's event handling logic
val interaction = InteractionData(
    userId = currentUserId,
    type = "task_completed", // Clear, descriptive type
    details = mapOf(
        "priority" to task.priority,                  // e.g., "HIGH", 1
        "project_category" to task.project?.category, // e.g., "WORK"
        "time_of_day_segment" to "MORNING"            // Derived feature
    ),
    timestamp = System.currentTimeMillis()
)
// Pass 'interaction' to KARL via the onNewData callback
```
The `LearningEngine` implementation (e.g., `:karl-kldl`) will then need to perform feature engineering on these `InteractionData` objects to convert them into numerical vectors suitable for the underlying ML model. This might involve techniques like one-hot encoding for categorical data, numerical scaling, or creating embeddings.

For inspiration on structuring `InteractionData`, see the types of events processed in our example application's `DataSource`.
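As a rough illustration of the kind of feature engineering an engine might perform, the snippet below one-hot encodes a categorical detail into a fixed-size vector. The `InteractionData` class here is a simplified stand-in, and the priority vocabulary is an assumption for the sketch; a real engine defines its own encoding scheme.

```kotlin
// Simplified stand-in for KARL's InteractionData, for illustration only.
data class InteractionData(
    val userId: String,
    val type: String,
    val details: Map<String, Any?>,
    val timestamp: Long,
)

// Hypothetical categorical vocabulary; a real engine would derive or configure this.
val priorityVocab = listOf("LOW", "MEDIUM", "HIGH")

// One-hot encodes the "priority" detail into a FloatArray of size 3.
// Unknown or missing priorities yield an all-zero vector.
fun encodePriority(interaction: InteractionData): FloatArray {
    val vec = FloatArray(priorityVocab.size)
    val priority = interaction.details["priority"] as? String
    val idx = priority?.let { priorityVocab.indexOf(it) } ?: -1
    if (idx >= 0) vec[idx] = 1f
    return vec
}
```

The same pattern extends to other categorical details; numerical details would instead be scaled into a fixed range before being concatenated into the final feature vector.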
Initializing & Managing a KARL Container
Proper management of the `KarlContainer` lifecycle is crucial.
Creating a New Container for a User/Context
As shown in the Getting Started guide, use the `Karl.forUser(userId).build()` pattern to construct a container instance. You must provide implementations for `LearningEngine`, `DataStorage`, your application's `DataSource`, and a `CoroutineScope` tied to the relevant lifecycle (e.g., user session, ViewModel).
Key Snippet (from "Getting Started"):

```kotlin
val karlContainer = Karl.forUser(userId)
    .withLearningEngine(myEngineImpl)
    .withDataStorage(myDataStorageImpl)
    .withDataSource(myDataSourceImpl)
    .withCoroutineScope(applicationManagedScope)
    .build()

applicationManagedScope.launch {
    karlContainer.initialize(...) // Pass dependencies again for now
}
```
Loading and Saving Container State (Persistence across sessions)
The learned state of the AI (model weights, etc.) is encapsulated in `KarlContainerState`.
- Loading: Occurs automatically during `karlContainer.initialize()` if a previous state exists for the user in the `DataStorage`.
- Saving: Your application must call `karlContainer.saveState()` at appropriate times:
  - Periodically during long sessions (e.g., after a certain number of interactions or time interval).
  - When the application is about to close or the user session ends. This is critical to persist the latest learning.

  ```kotlin
  applicationManagedScope.launch { karlContainer.saveState().join() } // .join() if saving is critical before exit
  ```

- The chosen `DataStorage` implementation (e.g., `:karl-room`) handles the actual serialization and disk I/O.
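One way to implement the "save periodically" guideline is a small gating helper that decides when enough interactions and time have accumulated. The `SavePolicy` class below is a hypothetical application-side helper, not part of KARL; the thresholds are arbitrary defaults.

```kotlin
// Hypothetical helper: decides when to trigger karlContainer.saveState().
class SavePolicy(
    private val everyNInteractions: Int = 25, // save after this many interactions...
    private val minIntervalMs: Long = 60_000, // ...but at most once per interval
) {
    private var sinceLastSave = 0
    private var lastSaveAtMs = 0L

    // Call on each interaction; returns true when a save should be launched.
    fun shouldSave(nowMs: Long): Boolean {
        sinceLastSave++
        if (sinceLastSave >= everyNInteractions && nowMs - lastSaveAtMs >= minIntervalMs) {
            sinceLastSave = 0
            lastSaveAtMs = nowMs
            return true
        }
        return false
    }
}
```

When `shouldSave` returns true, launch `karlContainer.saveState()` in your application scope; on shutdown, save unconditionally regardless of the policy.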
Handling Multiple Containers (If applicable)
If your application supports multiple distinct users or isolated contexts on the same device, you would create and manage a separate `KarlContainer` instance for each, identified by a unique `userId` passed to `Karl.forUser(userId)`. Each container will have its own independent learned state.
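A simple per-user registry keeps one container per `userId`. The sketch below is generic so it stays framework-agnostic; in practice, `create` would wrap the `Karl.forUser(userId)` builder chain, and the caller would release/save the instance returned by `remove`. This helper is an assumption of this guide, not a KARL API.

```kotlin
// Hypothetical helper: one container (or any per-user resource) per userId.
class PerUserRegistry<C : Any>(private val create: (userId: String) -> C) {
    private val instances = mutableMapOf<String, C>()

    // Returns the existing instance for this user, creating it on first access.
    fun forUser(userId: String): C = instances.getOrPut(userId) { create(userId) }

    // Removes and returns the instance so the caller can save and release it.
    fun remove(userId: String): C? = instances.remove(userId)
}
```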
Feeding Data to KARL (Triggering the learning step)
This is primarily the role of your `DataSource` implementation.
When and How `DataSource` Provides Data
Your `DataSource` implementation's `observeInteractionData` method will be called by the `KarlContainer` during its initialization. Inside this method, your application should:
- Subscribe or listen to relevant internal application events that represent user interactions.
- Upon receiving an event, transform it into an `InteractionData` object (as discussed in 4.1.2).
- Call the `onNewData: suspend (InteractionData) -> Unit` callback (provided by the `KarlContainer`) with the new `InteractionData`. This callback internally queues the data for processing by the `LearningEngine`'s `trainStep()`.
The `KarlContainer` handles invoking the `LearningEngine`'s `trainStep()` internally when new data is received from the `DataSource` via the `onNewData` callback.
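This flow can be sketched with simplified stand-ins. Note that the real `DataSource` contract uses a suspend callback invoked inside a coroutine; the event type, field names, and `type` strings below are hypothetical.

```kotlin
// Simplified stand-ins for illustration; real KARL types differ in detail.
data class InteractionData(
    val userId: String,
    val type: String,
    val details: Map<String, Any?>,
    val timestamp: Long,
)

// Hypothetical application-side event.
data class TaskEvent(val kind: String, val priority: String?)

// Listens to app events, maps them to InteractionData, and forwards them to
// the container-provided callback (a suspend function in the real API).
class TaskDataSource(
    private val userId: String,
    private val onNewData: (InteractionData) -> Unit,
) {
    fun onTaskEvent(event: TaskEvent, nowMs: Long) {
        onNewData(
            InteractionData(
                userId = userId,
                type = "task_${event.kind}",                 // e.g., "task_completed"
                details = mapOf("priority" to event.priority),
                timestamp = nowMs,
            )
        )
    }
}
```

In a real implementation, `onTaskEvent` would typically be driven by collecting a flow of application events inside the coroutine started by `observeInteractionData`.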
Asynchronous Processing (Using Kotlin Coroutines)
All KARL operations that might be long-running (initialization, saving state, training steps, predictions) are designed to be suspend functions or to return `Job`s. It is crucial that your application:
- Provides a `CoroutineScope` to the `KarlContainer` that is tied to an appropriate lifecycle (e.g., ViewModel scope, application scope, user session scope).
- Launches calls to KARL's suspend functions (like `initialize`, `getPrediction`, `saveState`, `reset`, `release`) within this scope or another appropriate coroutine context to avoid blocking the main UI thread.
- Remembers that the `onNewData` callback in your `DataSource` is a suspend function. The `KarlContainer` calls it from its own internal coroutine, and `LearningEngine.trainStep()` is also designed to be non-blocking or to offload work.
Using KARL's Predictions (Triggering the inference step)
When and How to Call getPrediction()
Your application requests a prediction by calling `suspend fun getPrediction(): Prediction?` on your `KarlContainer` instance. You might do this:
- In response to a specific user action (e.g., after the user types a command, to predict the next one).
- When a particular screen or UI component becomes active, to personalize its content.
- Periodically, if you want to proactively update a suggestion UI.
```kotlin
applicationManagedScope.launch {
    val currentPrediction = karlContainer.getPrediction()
    if (currentPrediction != null) {
        // Update UI or application logic
    }
}
```
Interpreting Prediction Output (the `Prediction` data class)
The `getPrediction()` method returns a nullable `Prediction` object. This data class contains:

- `suggestion: String`: The primary suggested output.
- `confidence: Float`: The model's confidence in this suggestion (typically 0.0 to 1.0).
- `type: String`: A category for the prediction, helping your app understand how to use it.
- `metadata: Map`: Optional additional data.

Your application logic will use these fields to, for example, display the suggestion, decide whether to show it based on confidence, or perform different actions based on the prediction type.
Integrating with UI Frameworks (e.g., Jetpack Compose)
If your application uses a declarative UI framework like Jetpack Compose, you can integrate KARL's outputs reactively.
Displaying Suggestions
Use Compose's state management (e.g., a `StateFlow` collected as state) to hold the latest `Prediction`. When the state updates, your Composable UI will recompose to display the new suggestion.
Conceptual Snippet (ViewModel/StateHolder):

```kotlin
private val _karlPrediction = MutableStateFlow<Prediction?>(null)
val karlPrediction: StateFlow<Prediction?> = _karlPrediction.asStateFlow()

fun fetchKarlPrediction() {
    viewModelScope.launch { // Assuming ViewModel scope
        _karlPrediction.value = karlContainer.getPrediction()
    }
}
```

And in the Composable:

```kotlin
val prediction by viewModel.karlPrediction.collectAsState()
prediction?.let { Text("KARL Suggests: ${it.suggestion}") }
```
Visualizing KARL's Learning Progress (The "maturity" UI element)
The `:karl-compose-ui` module provides a `KarlLearningProgressIndicator`. To use this effectively, your application or the `LearningEngine` would need to expose a metric representing the model's "maturity" or learning progress (e.g., number of interactions processed, a confidence score trend). This is an advanced feature that requires careful design in the `LearningEngine`.

See the `:karl-compose-ui` module for available components like `KarlContainerUI`, which incorporates such an indicator.
Handling User Interaction with Suggestions
Provide UI elements for users to accept, reject, or ignore KARL's suggestions. This feedback can itself be valuable `InteractionData` to further refine KARL's learning (e.g., `InteractionData(type = "suggestion_accepted", details = mapOf("suggestion_type" to prediction.type))`).
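Concretely, a click handler on the suggestion UI might build the feedback event like this. Both data classes are simplified stand-ins, and the `type` strings are illustrative conventions, not fixed KARL identifiers.

```kotlin
// Simplified stand-ins for illustration only.
data class Prediction(val suggestion: String, val confidence: Float, val type: String)
data class InteractionData(
    val userId: String,
    val type: String,
    val details: Map<String, Any?>,
    val timestamp: Long,
)

// Turns user feedback on a suggestion into a new learning signal,
// which the DataSource can then forward via onNewData.
fun feedbackFor(userId: String, prediction: Prediction, accepted: Boolean, nowMs: Long) =
    InteractionData(
        userId = userId,
        type = if (accepted) "suggestion_accepted" else "suggestion_rejected",
        details = mapOf("suggestion_type" to prediction.type),
        timestamp = nowMs,
    )
```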
Customizing the AI Model
While KARL aims for ease of use, advanced users or specific applications might require model customization.
Choosing or Configuring the Model Architecture
The default `LearningEngine` implementation (e.g., in `:karl-kldl`) might use a simple model like an MLP. Future versions or custom implementations could allow:

- Selecting different model types (RNNs, Transformers for sequence data) via configuration.
- Adjusting layer sizes, number of layers, or other architectural parameters if the engine's constructor or factory methods expose them.

Currently, deep customization requires modifying the chosen `LearningEngine` implementation module or creating your own.
Hyperparameters (Tuning the Learning Process)
Hyperparameters like learning rate, batch size (for mini-batch training, if implemented), or regularization factors significantly impact learning. If a `LearningEngine` implementation exposes these (e.g., via its constructor), you can tune them. However, on-device hyperparameter tuning is complex; usually, sensible defaults are provided.
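If an engine did expose such knobs, a configuration object with sensible defaults is one natural shape for them. Everything below is hypothetical, including the field names, defaults, and the `MyLearningEngine` constructor; it is not an actual KARL API.

```kotlin
// Hypothetical hyperparameter holder; field names and defaults are illustrative.
data class EngineHyperparameters(
    val learningRate: Float = 0.001f,
    val batchSize: Int = 16,
    val l2Regularization: Float = 0.0f,
)

// A custom engine constructor could then accept it, e.g.:
// val engine = MyLearningEngine(hyperparameters = EngineHyperparameters(learningRate = 0.01f))
```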
Advanced model customization is beyond the scope of basic integration and typically involves delving into the source code of the specific `LearningEngine` implementation module. The Project KARL contributor documentation provides more details on the internal architecture.