Database Management and Maintenance
Monitor, maintain, document, and manage databases, including performance optimization and data operations (25% of the exam).
In the context of the CompTIA DataSys+ certification, Database Management and Maintenance is a critical domain that focuses on the ongoing operational health, stability, and efficiency of database systems after their initial deployment. This area encompasses a wide array of administrative tasks, ranging from routine monitoring, performance tuning, and scheduled maintenance to documentation, change management, and day-to-day data operations.
Concepts covered: System alerts monitoring, Performance metrics tracking, Transaction log monitoring, Resource utilization monitoring, Database health checks, Query performance monitoring, Deadlock detection and resolution, Connection monitoring, Database reporting tools, Query optimization, Index management, Database patching, Capacity planning, Performance tuning, Statistics updates, Database defragmentation, Archiving and purging, Scheduled maintenance tasks, Database upgrades, Data dictionary maintenance, ERD documentation, Standard Operating Procedures (SOPs), Compliance documentation, Change management documentation, Database documentation best practices, Data modification operations, Managing data redundancy, Defining relationships, Views management, Materialized views, Data integrity constraints, Referential integrity, Data quality management
DataSys+ - Database Management and Maintenance Example Questions
Test your knowledge of Database Management and Maintenance
Question 1
A logistics company operates an Apache CouchDB 3.1.1 cluster with 18 nodes managing 4.3TB of shipment tracking documents across its global distribution network. The database team is planning an upgrade to CouchDB 3.3.2 to take advantage of enhanced query indexing and improved replication conflict resolution algorithms. During staging environment validation, the team discovers that the production system uses 56 design documents containing MapReduce views for real-time shipment status aggregations, and that 28 of these views use the built-in '_stats' reduce function, whose internal calculations have changed in a way that shifts standard deviation outputs by 0.8-3.2%. Additionally, testing reveals that 19 continuous replication feeds using filtered replication with complex JavaScript filter functions now experience a 340% increase in replication lag, with the average delay growing from 2.1 seconds to 9.2 seconds. The application layer executes 6,800 document updates per minute during peak shipping hours, and 14 custom JavaScript validation functions used for document integrity checking now trigger false-positive rejections on 4.3% of legitimate updates due to stricter JSON schema enforcement. The production environment must meet an SLA of sub-4-second query response times for customer tracking portals, and the operations team runs 23 scheduled Mango queries for route optimization that depend on consistent statistical aggregations. A major contract with an automotive manufacturer begins in 8 weeks and is expected to increase document volume by 420%, and the data analytics team has built 17 predictive models based on current statistical outputs. The company can afford at most a 5-hour maintenance window during off-peak hours. What should be the database administrator's primary technical strategy for ensuring operational continuity post-upgrade?
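Because this scenario hinges on quantifying how the '_stats' reduce output shifts between versions, one practical pre-upgrade step is to diff the grouped view results from the staging cluster against production. The sketch below is a minimal example of that comparison, not part of the exam scenario: the hostnames, the shipments database, and the status_agg design document with its transit_by_region view are all illustrative assumptions.

```python
import math
import requests  # assumes the requests package is installed

PROD = "http://prod-couch.example:5984"    # CouchDB 3.1.1 production cluster
STAGE = "http://stage-couch.example:5984"  # CouchDB 3.3.2 staging cluster
VIEW = "/shipments/_design/status_agg/_view/transit_by_region?reduce=true&group=true"

def stats_by_key(base_url: str) -> dict:
    """Fetch grouped _stats rows: {group key: {sum, count, min, max, sumsqr}}."""
    resp = requests.get(base_url + VIEW, timeout=30)
    resp.raise_for_status()
    return {str(row["key"]): row["value"] for row in resp.json()["rows"]}

def stddev(v: dict) -> float:
    """Population standard deviation derived from the _stats reduce fields."""
    mean = v["sum"] / v["count"]
    return math.sqrt(max(v["sumsqr"] / v["count"] - mean * mean, 0.0))

prod, stage = stats_by_key(PROD), stats_by_key(STAGE)
for key in sorted(prod.keys() & stage.keys()):
    before, after = stddev(prod[key]), stddev(stage[key])
    drift = abs(after - before) / before * 100 if before else 0.0
    if drift > 0.5:  # flag groups whose statistical outputs moved materially
        print(f"{key}: stddev {before:.3f} -> {after:.3f} ({drift:.2f}% drift)")
```

Any group flagged by a comparison like this tells the team which of the 23 scheduled Mango queries and 17 predictive models would need re-baselining before the cutover window.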
Question 2
A database administrator is managing a MySQL instance supporting a real-time bidding platform for an advertising exchange. Over the past eleven weeks, bid-matching query execution times have degraded from 45ms to 2,800ms. The server has 24 CPU cores and 128GB RAM, and monitoring shows 29% CPU utilization and 43% memory consumption during bid-processing windows. Storage performance metrics indicate consistent 3ms read latency and 5ms write latency. The bid_requests table contains 312 million rows with an average row size of 1.9KB and processes 8,500 INSERT operations per minute alongside 12,000 SELECT operations per minute. Bid-matching queries filter on the advertiser_id, campaign_budget_remaining, target_demographics, and geographic_region columns, then sort by bid_amount descending to determine auction winners. Query execution plan analysis reveals that 89% of bid-matching operations perform full table scans using the clustered index on bid_request_id. The table has a single secondary (non-clustered) index on timestamp_created, created during initial deployment three years ago. The InnoDB buffer pool is configured at 96GB with a hit ratio of 98.1%. Table statistics are updated via automated maintenance running every Sunday at 3 AM. The query cache is enabled with a 1GB allocation showing 31% utilization. Binary logging operates in ROW format for replication to analytics systems. Slow query log analysis shows that bid-matching queries account for 76% of total execution time. Connection pooling maintains 220 persistent connections, with an average execution rate of 15,000 queries per second during peak hours. Which performance tuning modification would most effectively restore optimal bid-matching query execution times?
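The plan evidence here (89% full scans despite ample CPU, memory, and I/O headroom) points at missing index support rather than hardware. As a hedged illustration only, the sketch below creates one plausible composite index and re-checks the plan with EXPLAIN; the index name, key order, bind values, and connection details are assumptions, and the DESC index key requires MySQL 8.0 or later (earlier versions parse but ignore it).

```python
import mysql.connector  # assumes the mysql-connector-python package

conn = mysql.connector.connect(
    host="bid-db.example", user="dba", password="***", database="exchange"
)
cur = conn.cursor()

# Equality-filtered columns lead; bid_amount DESC comes next so the
# ORDER BY ... LIMIT can walk the index in sorted order; the range-filtered
# budget column sits last, still covered for index-condition filtering.
# ALGORITHM=INPLACE / LOCK=NONE keeps the 312M-row build online.
cur.execute("""
    CREATE INDEX ix_bid_matching
        ON bid_requests (advertiser_id, geographic_region,
                         target_demographics, bid_amount DESC,
                         campaign_budget_remaining)
        ALGORITHM=INPLACE LOCK=NONE
""")

# Verify the optimizer now uses the new index instead of scanning the
# clustered index on bid_request_id (expect ref access and no filesort).
cur.execute("""
    EXPLAIN
    SELECT * FROM bid_requests
    WHERE advertiser_id = %s AND geographic_region = %s
      AND target_demographics = %s AND campaign_budget_remaining > 0
    ORDER BY bid_amount DESC
    LIMIT 1
""", (1042, "US-WEST", "25-34"))  # illustrative bind values
for row in cur.fetchall():
    print(row)
cur.close()
conn.close()
```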
Question 3
A pharmaceutical research company operates a clinical_trials database managing patient participation data across 280 active studies. The trial_participants table contains 1.9 million records with columns: participant_id (primary key), study_id, enrollment_date, dosage_level, adverse_events_count, last_checkup_date, participation_status, and assigned_researcher_id. On September 8th at 7:45 AM, a data integration script executed a MERGE operation intended to synchronize dosage_level adjustments from a physician portal system. However, the matching logic used participant_last_name instead of participant_id, causing 18,700 participants to receive dosage_level values from different patients who shared the same last name. The error was detected on September 11th at 1:30 PM when a safety monitoring committee noticed statistically impossible dosage escalation patterns during a routine review. The affected records span 67 different clinical trials across oncology, cardiology, and neurology departments. The trial_participants table has foreign key constraints to study_protocols, medication_dispensing_log, adverse_events, and regulatory_submissions tables. A trigger automatically generates safety alerts in the monitoring_queue table when dosage_level changes exceed protocol-defined thresholds, and 4,300 safety alerts have been created and forwarded to principal investigators. The medication_dispensing_log shows that pharmacies have already prepared and labeled 2,800 medication doses based on the incorrect dosage_level values, though only 340 have been administered to patients. The adverse_events table contains 210 new event reports filed since the erroneous merge, making it unclear which events relate to incorrect dosing. The regulatory_submissions table references these dosage records in ongoing FDA compliance reports due September 20th. The database processes 1,800 participant record updates per hour during clinical operation hours (6 AM to 8 PM across time zones), and uses SERIALIZABLE isolation with table-level locking for merge operations. The institutional review board requires corrected data and a complete impact assessment before authorizing continuation of affected trials. What data modification strategy should the database administrator implement to restore accurate dosage_level assignments while managing the clinical safety and regulatory compliance complexities?
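A common recovery pattern for a mis-keyed MERGE like this is to restore a pre-merge copy of the table to a side table, audit every divergent row, and re-apply the correct values keyed on participant_id. The sketch below is a minimal illustration of that pattern under stated assumptions: a SQL Server backend (consistent with the scenario's MERGE syntax) reached via pyodbc, a point-in-time restore already loaded as trial_participants_pit (as of 7:44 AM on September 8th), and an illustrative audit table. Rows legitimately updated after the merge would still need reconciliation from application logs, and the dosage trigger will re-fire during the correction, so the monitoring_queue owners must be coordinated with first.

```python
import pyodbc  # assumes an ODBC driver for the SQL Server backend

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=clinical-db.example;DATABASE=clinical_trials;Trusted_Connection=yes;"
)
cur = conn.cursor()

# 1. Capture every divergent row first so the IRB impact assessment has a
#    complete before/after record (dosage_correction_audit is illustrative).
cur.execute("""
    SELECT t.participant_id, t.study_id,
           t.dosage_level AS corrupted_value,
           p.dosage_level AS restored_value
    INTO dosage_correction_audit
    FROM trial_participants AS t
    JOIN trial_participants_pit AS p
      ON p.participant_id = t.participant_id
    WHERE t.dosage_level <> p.dosage_level
""")

# 2. Re-apply the pre-merge values, this time keyed on participant_id,
#    the column the faulty MERGE should have matched on.
cur.execute("""
    UPDATE t
    SET t.dosage_level = p.dosage_level
    FROM trial_participants AS t
    JOIN trial_participants_pit AS p
      ON p.participant_id = t.participant_id
    WHERE t.dosage_level <> p.dosage_level
""")
print(cur.rowcount, "participant rows corrected")
conn.commit()  # pyodbc autocommit is off, so both statements commit together
```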