FAQ & Troubleshooting
This section addresses frequently asked questions (FAQs) about Project KARL and provides guidance for troubleshooting common issues that developers or users might encounter during integration or usage.
Frequently Asked Questions (FAQs)
Does KARL send any user data to the cloud or external servers?
No. KARL is designed as a privacy-first, local-only AI library. All learning, data processing, and model storage occur exclusively on the user's device by default. There is no data egress to any external servers unless an application developer explicitly builds such functionality on top of KARL (which would be outside KARL's core design).
How does KARL learn and personalize its behavior?
KARL employs incremental, on-device learning. Each KarlContainer starts with a basic, unadapted model (or random initialization). It learns and adapts solely based on the sequence of InteractionData provided by the host application for that specific user. Personalization builds over time with continued usage.
What kind of InteractionData should my application collect?
Focus on metadata about user interactions rather than sensitive personal content. Examples include: the type of action performed (e.g., "button_clicked", "file_opened"), features used, settings changed, timestamps, and contextual information (e.g., "current_mode: editing"). Avoid logging raw text input, passwords, or personally identifiable information (PII) not essential for the intended local learning task. See the Designing Interaction Data section for more details.
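As a rough illustration (the exact InteractionData constructor, fields, and package are assumptions here, not the definitive API), an interaction event capturing only metadata might be assembled like this:

```kotlin
// Hypothetical sketch: field names and the package are assumptions;
// check the actual InteractionData definition in :karl-core.
import com.karl.core.api.InteractionData

val interaction = InteractionData(
    type = "button_clicked",            // metadata about the action, not its content
    details = mapOf(
        "button_id" to "export_pdf",    // which feature was used
        "current_mode" to "editing"     // contextual information
    ),
    timestamp = System.currentTimeMillis()
)
// Note: no raw text input, passwords, or PII are recorded.
```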
How intelligent can a KARL-powered feature realistically be?
The level of intelligence depends on several factors: the quality and relevance of the InteractionData, the complexity of the model architecture used in the LearningEngine implementation (e.g., MLP vs. RNN), the amount of user interaction, and device processing capabilities. KARL aims for meaningful personalization and pattern recognition within the constraints of on-device resources, rather than attempting to replicate the capabilities of massive cloud-based models.
Can I use my own ML model or library with KARL?
Yes, by implementing the com.karl.core.api.LearningEngine interface. You can wrap any Kotlin-compatible ML library or custom model logic within this interface. KARL provides modules like :karl-kldl as a reference implementation.
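As a very rough skeleton, a custom engine might look like the following. The method names are modeled on operations mentioned elsewhere in this guide (initialize, trainStep, predict, getCurrentState), but their exact signatures, and the Prediction type, are assumptions; check the real interface in :karl-core before implementing.

```kotlin
// Assumed imports; actual packages may differ in your KARL version.
import com.karl.core.api.LearningEngine
import com.karl.core.api.InteractionData
import com.karl.core.api.KarlContainerState
import com.karl.core.api.Prediction // hypothetical type for prediction results

// Skeleton of a custom engine wrapping your own model logic.
class MyCustomEngine : LearningEngine {

    override suspend fun initialize(state: KarlContainerState?) {
        // Restore model weights from a previously saved state, or start fresh.
    }

    override suspend fun trainStep(data: InteractionData) {
        // Convert the interaction into numerical features and update the model incrementally.
    }

    override suspend fun predict(): Prediction? {
        // Run on-device inference over recent context; return null if no suggestion.
        return null
    }

    override suspend fun getCurrentState(): KarlContainerState {
        // Serialize current model weights so DataStorage can persist them.
        TODO("Serialize model state into a KarlContainerState")
    }
}
```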
Is the locally stored KARL data encrypted?
The DataStorage interface defines how state is persisted. The responsibility for secure storage, including encryption at rest, lies with the specific DataStorage implementation (e.g., :karl-room). It is highly recommended that implementations use robust encryption (like SQLCipher for SQLite) and leverage platform-specific secure key management. See the Encryption Implementation Details section.
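For example, on Android a Room-based DataStorage implementation could layer SQLCipher underneath via a SupportFactory. This is the generic SQLCipher-for-Android pattern, not KARL-specific code; the KarlDatabase class is a hypothetical stand-in, and a real implementation should derive the passphrase from platform secure storage (e.g., the Android Keystore) rather than hard-coding it.

```kotlin
import android.content.Context
import androidx.room.Room
import net.sqlcipher.database.SQLiteDatabase
import net.sqlcipher.database.SupportFactory

// Android-oriented sketch: wrap the Room database in SQLCipher encryption.
// KarlDatabase is a hypothetical @Database class from your DataStorage module.
fun buildEncryptedDb(context: Context, passphrase: CharArray) =
    Room.databaseBuilder(context, KarlDatabase::class.java, "karl_container.db")
        .openHelperFactory(SupportFactory(SQLiteDatabase.getBytes(passphrase)))
        .build()
```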
What happens to KARL's data when the user uninstalls the application?
All locally stored KARL data, including the learned AI state and any cached interaction history, will be deleted as per standard operating system behavior for application data removal. KARL does not persist data outside the application's designated storage areas.
Is on-device prediction fast enough for real-time use?
Yes, on-device inference (via getPrediction()) is designed to be fast. However, performance depends on the model complexity and device capabilities. For very demanding real-time scenarios, careful model optimization is necessary.
Troubleshooting Common Issues
- Issue: Build Failure - "Unresolved reference" to KARL core types (LearningEngine, InteractionData, etc.) in implementation modules (e.g., :karl-kldl, :karl-room).
  Cause: Gradle dependency misconfiguration. The implementation module is likely not depending correctly on :karl-core, or :karl-core is not correctly exposing its common artifacts for its JVM target.
  Solution:
  - Ensure :karl-core/build.gradle.kts defines a jvm() target in its kotlin { ... } block.
  - Ensure :karl-core's commonMain dependencies (like kotlinx-coroutines-core, if its API uses coroutine types) are declared with api(...) if they need to be transitive.
  - In the dependent module (e.g., :karl-kldl), ensure its commonMain dependencies include api(project(":karl-core")).
  - Perform a thorough Gradle clean (./gradlew clean, potentially deleting the .gradle and .idea folders) and re-sync/rebuild.
  - Verify all import statements in your Kotlin files are correct.
  - Refer to the dependency setup guide and ensure your module configurations align (a build script sketch follows this list).
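As a rough reference, a :karl-core build script satisfying the points above might look like this. The plugin setup and coroutines version are assumptions, so align them with your version catalog.

```kotlin
// :karl-core/build.gradle.kts -- illustrative sketch, not KARL's actual script.
plugins {
    kotlin("multiplatform")
}

kotlin {
    jvm() // the JVM target that :karl-kldl and :karl-room compile against

    sourceSets {
        val commonMain by getting {
            dependencies {
                // api(...) makes coroutine types used in KARL's public API
                // visible to modules depending on :karl-core (version assumed).
                api("org.jetbrains.kotlinx:kotlinx-coroutines-core:1.8.0")
            }
        }
    }
}
```

And in the dependent module (e.g., :karl-kldl/build.gradle.kts):

```kotlin
kotlin {
    jvm()
    sourceSets {
        val commonMain by getting {
            dependencies {
                api(project(":karl-core"))
            }
        }
    }
}
```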
- Issue: Build Failure - KSP errors in :karl-room (e.g., "MissingType", "Unresolved reference: kspJvm").
  Cause: Incorrect KSP plugin setup, a version mismatch, or KSP not seeing types from :karl-core.
  Solution:
  - Ensure the com.google.devtools.ksp plugin is applied in :karl-room/build.gradle.kts and that its version (in settings.gradle.kts or libs.versions.toml) matches your Kotlin version (e.g., Kotlin 1.9.23 pairs with KSP 1.9.23-1.0.19).
  - Ensure androidx.room:room-compiler is added as a KSP dependency to the correct target configuration (e.g., kspJvm("androidx.room:room-compiler:VERSION") for the jvm() target).
  - Ensure :karl-room's commonMain has api(project(":karl-core")) and api("androidx.room:room-common:VERSION").
  - Verify all Room-annotated classes (@Entity, @Dao, @Database, @TypeConverter) are correctly defined within the :karl-room module and have correct imports.
  - Review the Room setup in the Getting Started guide or example project (a build script sketch follows this list).
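A sketch of the KSP wiring in :karl-room/build.gradle.kts, under the assumptions above. The version numbers are placeholders; pair the KSP version with your Kotlin release and use a Room release that supports your targets.

```kotlin
// :karl-room/build.gradle.kts -- illustrative sketch; versions are placeholders.
plugins {
    kotlin("multiplatform")
    // KSP version must pair with the Kotlin version
    // (e.g., Kotlin 1.9.23 -> KSP 1.9.23-1.0.19).
    id("com.google.devtools.ksp") version "1.9.23-1.0.19"
}

val roomVersion = "2.7.1" // placeholder; match your Room release

kotlin {
    jvm()
    sourceSets {
        val commonMain by getting {
            dependencies {
                api(project(":karl-core"))
                api("androidx.room:room-common:$roomVersion")
            }
        }
        val jvmMain by getting {
            dependencies {
                implementation("androidx.room:room-runtime:$roomVersion")
            }
        }
    }
}

dependencies {
    // Per-target KSP configuration: kspJvm feeds Room's annotation
    // processor output into the jvm() target.
    add("kspJvm", "androidx.room:room-compiler:$roomVersion")
}
```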
- Issue: KARL is not making any predictions, or predictions seem random.
  Cause: Insufficient learning data, an incorrect InteractionData format or feature set, issues in the LearningEngine model or training logic, or the DataSource not emitting data correctly.
  Solution:
  - Verify your DataSource is correctly implemented and successfully sending InteractionData to the KarlContainer (use print statements or logging in your DataSource and KLDLLearningEngine.trainStep() for debugging; see the logging sketch after this list).
  - Ensure the InteractionData's type and details map contain meaningful features relevant to the prediction task.
  - Check the console logs from your LearningEngine implementation during trainStep() for any errors or warnings.
  - Give the AI time: it needs sufficiently diverse interactions to learn, and initial predictions will be naive.
  - Consider the complexity of the task versus the simplicity of the default model (e.g., the simple MLP in :karl-kldl). More complex patterns may require a more sophisticated model architecture.
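One lightweight way to confirm that interactions actually reach the engine is a logging decorator around your LearningEngine. The method name mirrors what this guide mentions, but the interface's exact members are assumptions, so adapt the override accordingly.

```kotlin
// Assumed imports; actual packages may differ in your KARL version.
import com.karl.core.api.LearningEngine
import com.karl.core.api.InteractionData

// Hypothetical decorator: logs every interaction, then delegates to the real
// engine. Useful for temporary debugging; remove before release builds.
class LoggingEngine(private val delegate: LearningEngine) : LearningEngine by delegate {
    override suspend fun trainStep(data: InteractionData) {
        println("KARL trainStep <- type=${data.type} details=${data.details}")
        delegate.trainStep(data)
    }
}
```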
- Issue: Application performance degrades after integrating KARL.
  Cause: Learning steps (trainStep) or predictions (getPrediction) might be too computationally intensive for the device, or they are running on the main UI thread.
  Solution:
  - Ensure all calls to KarlContainer methods that perform significant work (initialize, saveState, getPrediction, and the internal trainStep triggered by the DataSource) are executed on background threads. KARL is designed with suspend functions and returns Jobs to facilitate this, using the CoroutineScope you provide (see the sketch after this list).
  - Profile your LearningEngine's trainStep and predict methods.
  - Optimize the feature-extraction logic within your LearningEngine.
  - Consider a simpler model architecture if the current one is too heavy.
  - For trainStep, the LearningEngine implementation might need to queue updates or perform them less frequently if individual steps are too costly.
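A minimal sketch of keeping KARL's heavier work off the UI thread with the scope you hand to the container. KarlContainer and saveState() are named in this guide, but their exact signatures are assumptions.

```kotlin
import com.karl.core.api.KarlContainer // assumed package
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.SupervisorJob
import kotlinx.coroutines.launch

// A dedicated scope for KARL work, on a CPU-oriented default dispatcher.
val karlScope = CoroutineScope(SupervisorJob() + Dispatchers.Default)

fun onAppClosing(karlContainer: KarlContainer) {
    karlScope.launch(Dispatchers.IO) { // disk/DB persistence belongs on IO
        karlContainer.saveState()
    }
}
```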
- Issue: Saved AI state (KarlContainerState) doesn't seem to load correctly after an app restart.
  Cause: Problems in the DataStorage implementation's save/load logic, issues with state serialization/deserialization in the LearningEngine, or database schema/version mismatches (if using Room/SQLite).
  Solution:
  - Verify that karlContainer.saveState() is called reliably before the application closes or the container is released.
  - Debug the saveContainerState and loadContainerState methods in your DataStorage implementation; check for any I/O errors or exceptions.
  - Critically review the serialization (e.g., in LearningEngine.getCurrentState()) and deserialization (in LearningEngine.initialize()) logic for the model's state, and ensure it is robust.
  - If using Room, check for database migration issues if you have changed your entity schemas. Ensure exportSchema = true is set and that you handle migrations correctly (a migration sketch follows this list).
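For reference, a Room schema migration follows the standard pattern below. The table and column names are hypothetical, and where the migration is registered depends on how your DataStorage implementation builds its database.

```kotlin
import androidx.room.migration.Migration
import androidx.sqlite.db.SupportSQLiteDatabase

// Sketch of a Room migration for a schema change between versions 1 and 2.
val MIGRATION_1_2 = object : Migration(1, 2) {
    override fun migrate(db: SupportSQLiteDatabase) {
        // Hypothetical column added to a hypothetical state table.
        db.execSQL("ALTER TABLE container_state ADD COLUMN version INTEGER NOT NULL DEFAULT 1")
    }
}

// Registered when building the database, e.g.:
// Room.databaseBuilder(context, KarlDatabase::class.java, "karl.db")
//     .addMigrations(MIGRATION_1_2)
//     .build()
```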
Performance Considerations and Tips
- Asynchronous Operations: Always use the provided CoroutineScope and launch KARL operations (especially trainStep via the DataSource, initialize, saveState, and getPrediction) on appropriate background dispatchers (e.g., Dispatchers.IO for disk/DB work, Dispatchers.Default for CPU-intensive model updates).
- Efficient Feature Extraction: The process of converting InteractionData into numerical features for your model should be as efficient as possible, as it runs frequently.
- Model Complexity: Balance model complexity with on-device resource constraints (CPU, memory). Simpler models (like MLPs) train and predict faster but may capture less complex patterns; more complex models (RNNs, Transformers) are more powerful but more demanding.
- trainStep Frequency: If individual training steps are computationally noticeable, your LearningEngine might need internal logic to batch updates or train less frequently (e.g., after N interactions or on an idle timer) rather than on every single interaction. A batching sketch follows this list.
- State Serialization: Efficient serialization/deserialization of KarlContainerState is important for quick app startup (loading state) and shutdown (saving state). Choose an efficient format (e.g., Protobuf, FlatBuffers, or optimized Keras model saving if using KotlinDL) over very verbose ones like uncompressed JSON for large model states.
- Profile: Use profiling tools (the IntelliJ IDEA profiler, or the Android Studio profiler if targeting Android) to identify performance bottlenecks within your KARL integration or specific engine/storage implementations.
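One way to implement the batching idea above is a small queue inside your LearningEngine implementation. Everything here other than the InteractionData type (named in this guide) is illustrative, not part of KARL's API.

```kotlin
import com.karl.core.api.InteractionData // assumed package
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.launch

// Sketch: collect interactions into batches so the model is updated once per
// N events instead of on every single interaction.
class BatchingTrainer(
    scope: CoroutineScope,
    private val batchSize: Int = 16,
    private val updateModel: suspend (List<InteractionData>) -> Unit,
) {
    private val queue = Channel<InteractionData>(capacity = Channel.UNLIMITED)

    init {
        scope.launch {
            val batch = mutableListOf<InteractionData>()
            for (interaction in queue) {
                batch += interaction
                if (batch.size >= batchSize) {
                    updateModel(batch.toList()) // one heavier update instead of N tiny ones
                    batch.clear()
                }
            }
        }
    }

    fun submit(interaction: InteractionData) {
        queue.trySend(interaction)
    }
}
```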
Debugging the Learning or Prediction Process
- Logging: Add extensive logging (e.g., a simple println for development, or a proper logging library like SLF4J/Logback on the JVM) within:
  - your DataSource implementation (to see what InteractionData is being sent),
  - your LearningEngine's trainStep and predict methods (to see input features, model outputs, and confidence scores),
  - your DataStorage implementation (to verify saving and loading).
- Inspect Stored State: If using SQLite (via Room or SQLDelight), use a database browser tool to inspect the contents of the local database files and see what InteractionData or KarlContainerState is being stored.
- Simplified Test Cases: Create minimal test scenarios with very predictable sequences of InteractionData to verify that the model is learning basic patterns as expected.
- Model Summary: If using KotlinDL, use model.summary() to print the architecture of your neural network and ensure it is configured as intended.
- Isolate Components: Test each KARL component (LearningEngine, DataStorage) with mock inputs/dependencies in unit tests to verify their individual logic before integrating them into the full KarlContainer. A test sketch follows this list.
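As a sketch of such an isolated test, the example below drives an engine with a predictable interaction sequence. MyCustomEngine and the InteractionData fields are hypothetical stand-ins from the sketches earlier in this page; substitute your own implementation and the real constructor.

```kotlin
import com.karl.core.api.InteractionData // assumed package
import kotlinx.coroutines.test.runTest
import kotlin.test.Test
import kotlin.test.assertNotNull

class LearningEngineTest {
    @Test
    fun `engine produces a prediction after repeated identical interactions`() = runTest {
        val engine = MyCustomEngine() // hypothetical engine from the FAQ sketch
        engine.initialize(state = null)

        // Feed a highly predictable sequence so basic pattern learning is testable.
        repeat(50) {
            engine.trainStep(
                InteractionData(
                    type = "button_clicked",
                    details = mapOf("button_id" to "export_pdf"),
                    timestamp = it.toLong(),
                )
            )
        }

        // After many identical interactions, a sane engine should produce some output.
        assertNotNull(engine.predict())
    }
}
```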
If you encounter persistent issues not covered here, please feel free to open an issue with detailed information about the problem on our GitHub repository.