Dashboard Overview
The tapioka.ai dashboard is your mission control for AI-driven quality assurance. It provides a comprehensive view of your projects, testing infrastructure, and execution metrics.
Project Management
Upon logging in, you are presented with the Project List. Each project is represented as a card showing:
- Platform & Name: The project's target platform and name. The platform is chosen at creation and cannot be changed later. Projects are platform-specific: if you need to test multiple platforms (e.g., Android and iOS), you must create a separate project for each.
- Key Metrics: Number of Test Suites and Test Cases within that project.
- Quick Links: Fast access to the project's test library or recent runs.
Core Navigation
The sidebar provides deep-link access to the platform's three pillars. Each project has these main menu items:
- Tests: Create your test cases and suites, and schedule them to run.
- Runs: View all past and present test executions.
- Devices: Manage the available fleet of virtual and physical devices.
Test Structure
- The Project is the highest level of the hierarchy. It encompasses the entire codebase, configuration, and environment settings required to run your tests.
- A Test Suite is a logical collection of test cases grouped together. Suites are usually organized by feature, module, or priority (e.g., "Smoke Tests" or "Billing Module").
- The Test Case is the fundamental unit. It validates a specific behavior or a single "path" through the application. Each test case must belong to exactly one Project and one Test Suite, and consists of three parts:
  - Pre-conditions: The specific state needed before the test runs (e.g., being logged in).
  - The Scenario: The actual interactions, such as tapping a button or navigating to a given screen.
  - The Expected Result: The "moment of truth," where the expected outcome is compared against the actual result.
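The hierarchy above can be sketched as a simple data model. This is an illustrative Python sketch only; the class and field names are assumptions for clarity, not the tapioka.ai API.

```python
from dataclasses import dataclass, field

# Hypothetical model of the Project > Test Suite > Test Case hierarchy.
# All names here are illustrative, not part of any real tapioka.ai SDK.

@dataclass
class TestCase:
    name: str
    preconditions: list[str]   # state needed before the test (e.g., logged in)
    scenario: list[str]        # the actual interactions to perform
    expected_result: str       # outcome compared against the actual result

@dataclass
class TestSuite:
    name: str                  # usually a feature, module, or priority group
    cases: list[TestCase] = field(default_factory=list)

@dataclass
class Project:
    name: str
    platform: str              # fixed at creation; cannot be changed later
    suites: list[TestSuite] = field(default_factory=list)

# Example: a smoke-test suite inside an Android project
login_case = TestCase(
    name="User can log in",
    preconditions=["App installed", "Test account exists"],
    scenario=["Open app", "Enter credentials", "Tap 'Sign in'"],
    expected_result="Home screen is displayed",
)
project = Project(
    name="Shop App",
    platform="Android",
    suites=[TestSuite(name="Smoke Tests", cases=[login_case])],
)
```

Note how the model enforces the rule from the text: a test case exists only inside one suite, which exists only inside one project.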
Test Library Interface
Inside a project, tests are organized into Suites (e.g., "Smoke Tests", "Regression Pack"). The interface provides clear status indicators:
- Learning Status: Shows the percentage of the test scenario that the AI has successfully "learned".
- Execution States: Visual labels for Passed, Failed, Active, or Scheduled runs.
- Device Assignments: Displays which hardware an execution is targeting (e.g., Samsung S23).
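The four execution states form a small closed set, which could be modeled as an enum when consuming run data programmatically. This is a hedged sketch; the enum and helper function are hypothetical, not part of the tapioka.ai interface.

```python
from enum import Enum

# Hypothetical enum mirroring the execution-state labels shown in the
# Test Library; names and values are illustrative only.

class ExecutionState(Enum):
    PASSED = "Passed"
    FAILED = "Failed"
    ACTIVE = "Active"        # currently executing
    SCHEDULED = "Scheduled"  # queued for a future run

def needs_attention(state: ExecutionState) -> bool:
    """Flag runs that require follow-up (here: only failures)."""
    return state is ExecutionState.FAILED

# Example: filter a list of run states down to the ones needing follow-up
runs = [ExecutionState.PASSED, ExecutionState.FAILED, ExecutionState.SCHEDULED]
failed = [s for s in runs if needs_attention(s)]
```

Treating the states as an enum rather than free-form strings catches typos at comparison time instead of silently matching nothing.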