Android SDK for session replay and analytics tracking.
📖 For detailed setup instructions, see SETUP.md
The sample app requires configuration before running:
Option 1: Local Properties (Recommended)
Create local.properties in the project root:
```properties
OR_SERVER_URL=https://your-server.com/ingest
OR_PROJECT_KEY=your-project-key
```

Option 2: Environment Variables
```bash
export OR_SERVER_URL="https://your-server.com/ingest"
export OR_PROJECT_KEY="your-project-key"
./gradlew assembleDebug
```

Option 3: Gradle Properties
Add to your ~/.gradle/gradle.properties:
```properties
OR_SERVER_URL=https://your-server.com/ingest
OR_PROJECT_KEY=your-project-key
```

ℹ️ Note: If OR_PROJECT_KEY is not configured, the app will run normally but tracking will be disabled. A warning will be logged to help developers identify the missing configuration.
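All three options feed the same two values into the build. As a rough, hypothetical sketch (the `resolveConfigValue` helper and the BuildConfig field names are illustrative, not the sample app's actual script), a `build.gradle.kts` fragment could resolve them in the order local.properties, environment variable, gradle.properties, defaulting to an empty string so the build still succeeds:

```kotlin
// Illustrative build.gradle.kts fragment -- not the repo's actual script
import java.util.Properties

// Option 1: load local.properties if it exists
val localProps = Properties().apply {
    val f = rootProject.file("local.properties")
    if (f.exists()) f.inputStream().use { load(it) }
}

// Resolution order: local.properties -> environment variable -> gradle.properties -> ""
fun resolveConfigValue(name: String): String =
    localProps.getProperty(name)
        ?: System.getenv(name)
        ?: (project.findProperty(name) as String?)
        ?: ""

android {
    defaultConfig {
        // Exposed to the app; an empty OR_PROJECT_KEY leaves tracking disabled with a logged warning
        buildConfigField("String", "OR_SERVER_URL", "\"${resolveConfigValue("OR_SERVER_URL")}\"")
        buildConfigField("String", "OR_PROJECT_KEY", "\"${resolveConfigValue("OR_PROJECT_KEY")}\"")
    }
}
```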
The project uses Gradle version catalogs and includes:
- Debug build: Includes `.debug` suffix and debug logging enabled
- Release build: ProGuard enabled with R8 optimization (see the build-script sketch after this list)
- Min SDK: 24 (Android 7.0)
- Target SDK: 34 (Android 14)
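As a rough illustration of that configuration, a hypothetical `build.gradle.kts` fragment (values taken from the list above; the module's real script may differ):

```kotlin
// Illustrative fragment mirroring the list above -- not the module's actual build script
android {
    defaultConfig {
        minSdk = 24    // Android 7.0
        targetSdk = 34 // Android 14
    }
    buildTypes {
        getByName("debug") {
            applicationIdSuffix = ".debug" // ".debug" suffix, installs alongside release
            // debug logging is enabled in this variant
        }
        getByName("release") {
            isMinifyEnabled = true // ProGuard rules applied, shrunk/optimized by R8
            proguardFiles(getDefaultProguardFile("proguard-android-optimize.txt"), "proguard-rules.pro")
        }
    }
}
```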
Key dependencies:
- Kotlin 2.0.0
- AndroidX Core KTX 1.13.1
- Gson 2.10.1
- Apache Commons Compress 1.26.1
- Jetpack Compose (for tracker UI)
```bash
./gradlew assembleDebug
./gradlew assembleRelease
```

The app module contains a sample application demonstrating tracker integration:
- Session tracking - Automatic session recording
- User events - Custom events and metadata
- Input tracking - Automatic EditText field tracking
- GraphQL monitoring - Query and mutation tracking
- Network tracking - HTTP request/response capture
- Touch events - Click and swipe gesture recording
- Screenshot sanitization - Mask sensitive UI elements
- Analytics events - All mobile event types covered
Input tracking is automatic when `analytics = true`. All EditText fields are automatically tracked when an activity is displayed.
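For context on where that flag lives, a minimal, hypothetical start-up sketch, assuming the tracker is started via `OpenReplay.start(...)` with an options object whose defaults enable analytics (the call signature, `Options.defaults()`, the import path, and the BuildConfig field names are assumptions; verify against the SDK and SETUP.md):

```kotlin
// Hypothetical start-up sketch -- check SETUP.md for the exact API
import android.app.Application
import com.openreplay.tracker.OpenReplay

class SampleApp : Application() {
    override fun onCreate() {
        super.onCreate()
        // Server URL and project key as configured above (BuildConfig names are illustrative)
        OpenReplay.serverURL = BuildConfig.OR_SERVER_URL
        OpenReplay.start(
            applicationContext,
            BuildConfig.OR_PROJECT_KEY,
            OpenReplay.Options.defaults(),   // defaults assumed to include analytics = true
            onStarted = { /* tracker ready; EditText auto-tracking is active */ }
        )
    }
}
```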
Features:
- ✅ Auto-discovery: Finds all EditText fields in the view hierarchy
- ✅ Smart labeling: Uses hint text, content description, or view ID
- ✅ Password detection: Automatically masks password input types
- ✅ Opt-out support: Exclude specific fields from tracking
Automatic Tracking:
```kotlin
// No code needed - EditText fields are automatically tracked!
// Password fields are automatically masked
```

Exclude Specific Fields:
```kotlin
import com.openreplay.tracker.listeners.excludeFromTracking

// Opt-out of tracking for sensitive fields
binding.internalNotesField.excludeFromTracking()
```

Manual Tracking (Optional):
```kotlin
import com.openreplay.tracker.listeners.trackTextInput

// Override auto-tracking with custom settings
binding.specialField.trackTextInput(label = "custom_label", masked = true)
```

The tracker captures input when the user (see the sketch after this list):
- Loses focus from the field
- Presses Done/Next/Send on the keyboard
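As an illustration of those two triggers only (plain Android listener code, not the tracker's internal implementation, and none of it is needed when auto-tracking is on):

```kotlin
// Illustration of the two capture triggers using standard Android callbacks
import android.view.inputmethod.EditorInfo
import android.widget.EditText

fun illustrateCaptureTriggers(field: EditText, capture: (String) -> Unit) {
    // Trigger 1: the field loses focus
    field.setOnFocusChangeListener { _, hasFocus ->
        if (!hasFocus) capture(field.text.toString())
    }
    // Trigger 2: the user presses Done/Next/Send on the keyboard
    field.setOnEditorActionListener { _, actionId, _ ->
        if (actionId == EditorInfo.IME_ACTION_DONE ||
            actionId == EditorInfo.IME_ACTION_NEXT ||
            actionId == EditorInfo.IME_ACTION_SEND
        ) capture(field.text.toString())
        false // don't consume the IME action
    }
}
```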
The Home tab includes a live demo of screenshot masking:
```kotlin
import com.openreplay.tracker.listeners.sanitize

// Mask a field in screenshots (visual only)
binding.creditCardField.sanitize()
```

- Regular Field: Visible in screenshots
- Sanitized Field: Masked with cross-stripes in screenshots
- Toggle Button: Switch sanitization on/off to see the difference
Known Limitation: Bottom sheets, dialogs, and floating windows are not captured in screenshots.
Why: Android's PixelCopy API captures only the activity's main window. Dialogs and bottom sheets create separate overlay windows (TYPE_APPLICATION) that exist outside the activity window hierarchy.
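A hypothetical sketch of that mechanism (illustrative only, not the tracker's capture code): PixelCopy is handed a single Window, so pixels drawn by a dialog's separate window never reach the bitmap.

```kotlin
// Illustration of the single-window limitation -- not the tracker's actual code
import android.app.Activity
import android.graphics.Bitmap
import android.os.Build
import android.os.Handler
import android.os.Looper
import android.view.PixelCopy

fun captureActivityWindow(activity: Activity, onCaptured: (Bitmap) -> Unit) {
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.O) return // PixelCopy.request(Window, ...) needs API 26+
    val view = activity.window.decorView
    val bitmap = Bitmap.createBitmap(view.width, view.height, Bitmap.Config.ARGB_8888)
    // Only activity.window is copied; an AlertDialog or BottomSheetDialog owns a
    // different Window, so its pixels are never part of this request.
    PixelCopy.request(activity.window, bitmap, { result ->
        if (result == PixelCopy.SUCCESS) onCaptured(bitmap)
    }, Handler(Looper.getMainLooper()))
}
```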
What IS Captured:
- ✅ Activity content (main UI)
- ✅ Fragments within the activity
- ✅ In-window overlays and popups
- ✅ Action bar and navigation bar
What IS NOT Captured:
- ❌ AlertDialog windows
- ❌ BottomSheetDialog windows
- ❌ Custom Dialog windows
- ❌ System dialogs (permissions, etc.)
Workaround - Full Interaction Tracking:
While dialog visuals aren't captured, all interactions ARE tracked:
```kotlin
// Dialog events are tracked
OpenReplay.event("dialog_opened", mapOf("type" to "login"))
OpenReplay.event("dialog_submitted", mapOf("action" to "confirm"))

// Input fields in dialogs are auto-tracked
// Button clicks are tracked
// All user interactions are logged
```

Result: You get complete behavioral analytics and interaction data, which is often more valuable than screenshots for understanding user actions.
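For example, a hypothetical wiring of those events into a standard framework AlertDialog (event names reused from the snippet above; the `OpenReplay` import path is an assumption):

```kotlin
// Hypothetical example: emitting the events above from a framework AlertDialog
import android.app.Activity
import android.app.AlertDialog
import com.openreplay.tracker.OpenReplay // import path assumed

fun showLoginDialog(activity: Activity) {
    AlertDialog.Builder(activity)
        .setTitle("Log in")
        .setPositiveButton("Confirm") { _, _ ->
            OpenReplay.event("dialog_submitted", mapOf("action" to "confirm"))
        }
        .setNegativeButton("Cancel", null)
        .show()
    OpenReplay.event("dialog_opened", mapOf("type" to "login"))
}
```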