FAQ & Troubleshooting
This section addresses frequently asked questions (FAQs) about Project KARL and provides guidance for troubleshooting common issues that developers or users might encounter during integration or usage.
Frequently Asked Questions (FAQs)
- Q: Does Project KARL send any user data to the cloud or external servers?
  A: No. KARL is designed as a privacy-first, local-only AI library. All learning, data processing, and model storage occur exclusively on the user's device by default. There is no data egress to any external servers unless an application developer explicitly builds such functionality on top of KARL (which would be outside KARL's core design).
- Q: How does KARL learn without pre-trained models or cloud data?
  A: KARL employs incremental, on-device learning. Each `KarlContainer` starts with a basic, unadapted model (or random initialization). It learns and adapts solely based on the sequence of `InteractionData` provided by the host application for that specific user. Personalization builds over time with continued usage.
- Q: What kind of data is suitable for `InteractionData`?
  A: Focus on metadata about user interactions rather than sensitive personal content. Examples include: the type of action performed (e.g., "button_clicked", "file_opened"), features used, settings changed, timestamps, and contextual information (e.g., "current_mode: editing"). Avoid logging raw text input, passwords, or personally identifiable information (PII) that is not essential for the intended local learning task. See the Designing Interaction Data section for more details.
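To make this concrete, a metadata-only event might be constructed as follows. The `InteractionData` shape below is an assumed sketch for illustration, not necessarily the exact class definition in `:karl-core`:

```kotlin
// Assumed sketch of the InteractionData shape; check :karl-core for the real class.
data class InteractionData(
    val type: String,                 // e.g., "button_clicked", "file_opened"
    val details: Map<String, String>, // contextual metadata only, never raw content
    val timestamp: Long,
)

val event = InteractionData(
    type = "file_opened",
    details = mapOf(
        "current_mode" to "editing", // context about application state
        "file_kind" to "markdown",   // a feature, not the file's contents
    ),
    timestamp = System.currentTimeMillis(),
)
// Deliberately absent: raw text input, passwords, full file paths, or other PII.
```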
- Q: How "smart" can KARL get on a local device?
  A: The level of intelligence depends on several factors: the quality and relevance of `InteractionData`, the complexity of the model architecture used in the `LearningEngine` implementation (e.g., MLP vs. RNN), the amount of user interaction, and device processing capabilities. KARL aims for meaningful personalization and pattern recognition within the constraints of on-device resources, rather than attempting to replicate the capabilities of massive cloud-based models.
- Q: Can I use my own machine learning models with KARL?
  A: Yes, by implementing the `com.karl.core.api.LearningEngine` interface. You can wrap any Kotlin-compatible ML library or custom model logic within this interface. KARL provides modules like `:karl-kldl` as a reference implementation.
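A custom engine is simply a class implementing that interface. The skeleton below is illustrative: the method names are taken from ones mentioned elsewhere on this page, but the exact signatures should be checked against `com.karl.core.api.LearningEngine` before implementing:

```kotlin
// Illustrative skeleton; verify the real interface signatures in :karl-core.
class MyCustomLearningEngine : LearningEngine {

    override suspend fun initialize(state: KarlContainerState?) {
        // Restore model weights from the saved state, or start from scratch.
    }

    override suspend fun trainStep(data: InteractionData) {
        // Convert the interaction into numerical features and update the
        // model incrementally, entirely on-device.
    }

    override suspend fun predict(context: List<InteractionData>): Prediction? {
        // Run on-device inference over recent interactions.
        return null
    }

    override suspend fun getCurrentState(): KarlContainerState {
        // Serialize model weights so DataStorage can persist them.
        TODO("Serialize model state")
    }
}
```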
- Q: How is the learned AI state (`KarlContainerState`) stored securely?
  A: The `DataStorage` interface defines how state is persisted. The responsibility for secure storage, including encryption at rest, lies with the specific `DataStorage` implementation (e.g., `:karl-room`). It is highly recommended that implementations use robust encryption (such as SQLCipher for SQLite) and leverage platform-specific secure key management. See the Encryption Implementation Details section.
- Q: What happens if the user clears application data or uninstalls the app?
  A: All locally stored KARL data, including the learned AI state and any cached interaction history, is deleted as part of the operating system's standard application data removal. KARL does not persist data outside the application's designated storage areas.
- Q: Is Project KARL suitable for real-time predictions?
  A: Yes, on-device inference (via `getPrediction()`) is designed to be fast. However, performance depends on model complexity and device capabilities. For very demanding real-time scenarios, careful model optimization is necessary.
Troubleshooting Common Issues
- Issue: Build Failure - "Unresolved reference" to KARL core types (`LearningEngine`, `InteractionData`, etc.) in implementation modules (e.g., `:karl-kldl`, `:karl-room`).
  Cause: Gradle dependency misconfiguration. The implementation module is likely not depending on `:karl-core` correctly, or `:karl-core` is not correctly exposing its common artifacts for its JVM target.
  Solution:
  - Ensure `:karl-core/build.gradle.kts` defines a `jvm()` target in its `kotlin { ... }` block.
  - Ensure `:karl-core`'s `commonMain` dependencies (like `kotlinx-coroutines-core`, if its API uses coroutine types) are declared with `api(...)` if they need to be transitive.
  - In the dependent module (e.g., `:karl-kldl`), ensure its `commonMain` dependencies include `api(project(":karl-core"))`.
  - Perform a thorough Gradle clean (`./gradlew clean`, potentially deleting the `.gradle` and `.idea` folders) and re-sync/rebuild.
  - Verify that all import statements in your Kotlin files are correct.
  Refer to the dependency setup guide and ensure your module configurations align.
- Issue: Build Failure - KSP errors in `:karl-room` (e.g., "MissingType", "Unresolved reference: kspJvm").
  Cause: Incorrect KSP plugin setup, a version mismatch, or KSP not seeing types from `:karl-core`.
  Solution:
  - Ensure the `com.google.devtools.ksp` plugin is applied in `:karl-room/build.gradle.kts` and that its version (in `settings.gradle.kts` or `libs.versions.toml`) matches your Kotlin version (e.g., Kotlin `1.9.23` uses KSP `1.9.23-1.0.19`).
  - Ensure `androidx.room:room-compiler` is added as a KSP dependency to the correct target configuration (e.g., `kspJvm("androidx.room:room-compiler:VERSION")` within `jvmMain.dependencies`).
  - Ensure `:karl-room`'s `commonMain` has `api(project(":karl-core"))` and `api("androidx.room:room-common:VERSION")`.
  - Verify that all Room-annotated classes (`@Entity`, `@Dao`, `@Database`, `@TypeConverter`) are correctly defined within the `:karl-room` module and have correct imports.
  Review the Room setup in the Getting Started guide or the example project.
- Issue: KARL is not making any predictions, or predictions seem random.
  Cause: Insufficient learning data, an incorrect `InteractionData` format or feature set, issues in the `LearningEngine` model or training logic, or the `DataSource` not emitting data correctly.
  Solution:
  - Verify your `DataSource` is correctly implemented and successfully sending `InteractionData` to the `KarlContainer` (use print statements or logging in your `DataSource` and `KLDLLearningEngine.trainStep()` for debugging).
  - Ensure the `InteractionData`'s `type` and `details` map contain meaningful features relevant to the prediction task.
  - Check the console logs from your `LearningEngine` implementation during `trainStep()` for any errors or warnings.
  - Remember that the AI needs time and sufficiently diverse interactions to learn; initial predictions will be naive.
  - Consider the complexity of the task versus the simplicity of the default model (e.g., the simple MLP in `:karl-kldl`). More complex patterns may require a more sophisticated model architecture.
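A lightweight way to confirm data is actually flowing is to log at the hand-off point between your application and KARL. The sketch below uses `println` tracing; the container entry point name is assumed for illustration:

```kotlin
// Illustrative debug tracing; swap println for a real logger in production.
fun onUserAction(type: String, details: Map<String, String>) {
    val data = InteractionData(type, details, System.currentTimeMillis())

    // Confirms the DataSource is emitting, and shows which features it carries.
    println("KARL-DEBUG: emitting '$type' with ${details.size} features: ${details.keys}")

    // Hypothetical hand-off; use whatever mechanism your DataSource
    // implementation actually uses to feed the KarlContainer.
    karlContainer.processInteraction(data)
}
```

If the log line never appears, the problem is upstream in your `DataSource` wiring, not in the model.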
- Issue: Application performance degrades after integrating KARL.
  Cause: Learning steps (`trainStep`) or predictions (`getPrediction`) may be too computationally intensive for the device, or may be running on the main UI thread.
  Solution:
  - Ensure all calls to `KarlContainer` methods that perform significant work (`initialize`, `saveState`, `getPrediction`, and the internal `trainStep` triggered by the `DataSource`) are executed on background threads. KARL is designed with suspend functions and returns `Job`s to facilitate this, using the `CoroutineScope` you provide.
  - Profile your `LearningEngine`'s `trainStep` and `predict` methods.
  - Optimize the feature extraction logic within your `LearningEngine`.
  - Consider a simpler model architecture if the current one is too heavy.
  - For `trainStep`, the `LearningEngine` implementation may need to queue updates or perform them less frequently if individual steps are too costly.
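The threading advice above can be sketched with standard coroutine dispatchers. The `karlContainer` calls are assumed API names from this page; the scope setup is ordinary `kotlinx.coroutines` usage:

```kotlin
import kotlinx.coroutines.*

// One scope for all KARL work, kept off the main/UI thread.
val karlScope = CoroutineScope(SupervisorJob() + Dispatchers.Default)

fun shutdown() {
    // Persisting state is I/O-bound: run it on Dispatchers.IO.
    karlScope.launch(Dispatchers.IO) {
        karlContainer.saveState()
    }
}

suspend fun requestSuggestion(): Prediction? =
    // Inference is CPU-bound: Dispatchers.Default is the better fit.
    withContext(Dispatchers.Default) {
        karlContainer.getPrediction()
    }
```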
- Issue: Saved AI state (`KarlContainerState`) doesn't seem to load correctly after an app restart.
  Cause: Problems in the `DataStorage` implementation's save/load logic, issues with state serialization/deserialization in the `LearningEngine`, or database schema/version mismatches (if using Room/SQLite).
  Solution:
  - Verify that `karlContainer.saveState()` is being called reliably before the application closes or the container is released.
  - Debug the `saveContainerState` and `loadContainerState` methods in your `DataStorage` implementation. Check for any I/O errors or exceptions.
  - Critically review the serialization (e.g., in `LearningEngine.getCurrentState()`) and deserialization (in `LearningEngine.initialize()`) logic for the model's state. Ensure it is robust.
  - If using Room, check for database migration issues if you have changed your entity schemas. Ensure `exportSchema = true` is set and that you handle migrations correctly.
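A quick way to pin down serialization bugs is a save/load round-trip test that bypasses `DataStorage` entirely, isolating the `LearningEngine`'s own logic. The helper `sampleInteraction` and the exact engine API are assumed:

```kotlin
import kotlinx.coroutines.test.runTest
import kotlin.test.Test
import kotlin.test.assertEquals

class StateRoundTripTest {
    @Test
    fun stateSurvivesSaveLoadRoundTrip() = runTest {
        val engine = KLDLLearningEngine()
        engine.initialize(state = null)
        repeat(10) { engine.trainStep(sampleInteraction(it)) } // sampleInteraction is a test helper you define

        val saved = engine.getCurrentState()

        // A fresh engine restored from the saved state should behave identically.
        val restored = KLDLLearningEngine()
        restored.initialize(state = saved)

        assertEquals(engine.predict(emptyList()), restored.predict(emptyList()))
    }
}
```

If this test passes but state still fails to load in the app, the fault lies in the `DataStorage` layer (or in a Room migration), not in the engine.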
Performance Considerations and Tips
- Asynchronous Operations: Always use the provided `CoroutineScope` and launch KARL operations (especially `trainStep` via the `DataSource`, `initialize`, `saveState`, and `getPrediction`) on appropriate background dispatchers (e.g., `Dispatchers.IO` for disk/DB work, `Dispatchers.Default` for CPU-intensive model updates).
- Efficient Feature Extraction: The process of converting `InteractionData` into numerical features for your model should be as efficient as possible, as it runs frequently.
- Model Complexity: Balance model complexity with on-device resource constraints (CPU, memory). Simpler models (like MLPs) train and predict faster but may capture less complex patterns; more complex models (RNNs, Transformers) are more powerful but more demanding.
- `trainStep` Frequency: If individual training steps are computationally noticeable, your `LearningEngine` may need internal logic to batch updates or train less frequently (e.g., after N interactions or on an idle timer) rather than on every single interaction.
- State Serialization: Efficient serialization/deserialization of `KarlContainerState` is important for quick app startup (loading state) and shutdown (saving state). Choose an efficient format (e.g., Protobuf, FlatBuffers, or optimized Keras model saving if using KotlinDL) over very verbose ones like uncompressed JSON for large model states.
- Profile: Use profiling tools (the IntelliJ IDEA profiler, or the Android Studio profiler if targeting Android) to identify performance bottlenecks within your KARL integration or specific engine/storage implementations.
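The batching idea under "`trainStep` Frequency" can be sketched as a small delegate that a `LearningEngine` implementation could use internally. The `InteractionData` type is assumed; the concurrency pieces are standard `kotlinx.coroutines`:

```kotlin
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock

// Sketch: buffer interactions and run one heavier model update per batch,
// instead of a weight update on every single interaction.
class BatchingTrainer(
    private val batchSize: Int = 16,
    private val applyBatch: suspend (List<InteractionData>) -> Unit,
) {
    private val pending = mutableListOf<InteractionData>()
    private val mutex = Mutex() // guards pending across concurrent trainStep calls

    suspend fun onInteraction(data: InteractionData) {
        val readyBatch: List<InteractionData>? = mutex.withLock {
            pending += data
            if (pending.size >= batchSize) {
                val batch = pending.toList()
                pending.clear()
                batch
            } else {
                null
            }
        }
        // Run the expensive update outside the lock.
        readyBatch?.let { applyBatch(it) }
    }
}
```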
Debugging the Learning or Prediction Process
- Logging: Add extensive logging (e.g., using a simple `println` for development, or a proper logging library like SLF4J/Logback for the JVM) within:
  - Your `DataSource` implementation (to see what `InteractionData` is being sent).
  - Your `LearningEngine`'s `trainStep` and `predict` methods (to see input features, model outputs, and confidence scores).
  - Your `DataStorage` implementation (to verify saving and loading).
- Inspect Stored State: If using SQLite (via Room or SQLDelight), use a database browser tool to inspect the contents of the local database files and see what `InteractionData` or `KarlContainerState` is being stored.
- Simplified Test Cases: Create minimal test scenarios with very predictable sequences of `InteractionData` to verify that the model is learning basic patterns as expected.
- Model Summary: If using KotlinDL, use `model.summary()` to print the architecture of your neural network and ensure it is configured as intended.
- Isolate Components: Test each KARL component (`LearningEngine`, `DataStorage`) with mock inputs/dependencies in unit tests to verify their individual logic before integrating them into the full `KarlContainer`.
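As an example of a simplified test case, a perfectly alternating input sequence gives the engine an unambiguous pattern to learn. The engine API and `InteractionData` shape are assumed from earlier sections:

```kotlin
import kotlinx.coroutines.test.runTest
import kotlin.test.Test

class PatternLearningTest {
    @Test
    fun engineLearnsTrivialAlternatingPattern() = runTest {
        val engine = KLDLLearningEngine()
        engine.initialize(state = null)

        // Feed a perfectly predictable A, B, A, B, ... sequence.
        repeat(100) { i ->
            val type = if (i % 2 == 0) "action_A" else "action_B"
            engine.trainStep(InteractionData(type, emptyMap(), i.toLong()))
        }

        // After seeing "action_A", the engine should now favor "action_B".
        val prediction = engine.predict(
            listOf(InteractionData("action_A", emptyMap(), 100L))
        )
        println("Predicted next action: $prediction")
    }
}
```

If the engine cannot learn even this, the problem is in the training logic or feature extraction, not in your application's data.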
If you encounter persistent issues not covered here, please feel free to open an issue with detailed information about the problem on our GitHub repository.