Database Fundamentals
Understand database structure types, develop SQL code, apply scripting methods, and analyze programming impact on database systems (24% of exam).
In the context of CompTIA DataSys+, understanding database fundamentals is the bedrock for managing and deploying data systems effectively. At its core, a database is an organized collection of structured information, or data, typically stored electronically in a computer system. The database management system (DBMS) is the software layer that sits between the database and its users or applications, handling storage, retrieval, and access control.
Concepts covered: Consistency in databases, Database views, Subqueries and nested queries, Relational databases, Non-relational databases, NoSQL databases, Amazon DynamoDB, MongoDB, Document databases, Key-value stores, Graph databases, Column-family databases, Database comparison and selection, Data Definition Language (DDL), Data Manipulation Language (DML), Transaction Control Language (TCL), Data Control Language (DCL), ACID principles, Atomicity in transactions, Isolation levels, Durability in transactions, Stored procedures, Triggers and events, SQL functions, SQL joins, Server-side scripting, Client-side scripting, Python for database scripting, PowerShell for database automation, Command-line scripting, Bash scripting for databases, SQL scripting best practices, Object-Relational Mapping (ORM), Hibernate ORM, Entity Framework, SQL code validation, Database server impact analysis, Query optimization fundamentals, N+1 query problem, Connection pooling
DataSys+ - Database Fundamentals Example Questions
Test your knowledge of Database Fundamentals
Question 1
A nationwide mortgage lending institution operates a distributed SQL Server database that processes loan origination documents and funding disbursements. The architecture consists of a primary instance running on Windows Server 2022 with database files stored on a NetApp SAN. The storage administrator has configured the NetApp array with NVRAM-backed write caching and enabled the 'delayed deduplication' feature, which postpones deduplication processing by holding write operations in controller cache memory for up to 25 seconds while analyzing block patterns across multiple volume writes before committing unique blocks to the disk shelves. The database is configured with DELAYED_DURABILITY = FORCED to improve transaction throughput during high-volume loan processing periods.

On the morning of October 12th, the system processed 3,200 loan disbursements between 09:00 and 09:45. At 09:45:18, a storage controller experienced a firmware fault that caused an immediate reboot, and the NVRAM contents were successfully flushed to persistent storage during the controlled shutdown sequence. Post-incident analysis nonetheless revealed that 73 loan disbursements committed and acknowledged to loan officers between 09:44:52 and 09:45:15 (with borrowers receiving wire transfer notifications) are completely missing from the recovered database, while the other 3,127 disbursements from the same 45-minute window persisted successfully. The SQL Server error log confirms that all 73 missing transactions completed successfully and returned commit confirmations to the application tier before the storage controller fault occurred. Database forensics show that these 73 transactions were logged to the transaction log file, but the log records carry delayed-durability markers.

Which technical modification would provide the MOST reliable solution to ensure that all acknowledged loan disbursements remain durable through unexpected storage subsystem failures?
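For context on the setting at the center of this scenario: with DELAYED_DURABILITY = FORCED, SQL Server acknowledges commits while the corresponding log records may still be sitting in an in-memory log buffer, so a failure can silently discard transactions that clients were already told had committed. A minimal T-SQL sketch for inspecting and disabling the setting; the database name LoanDB is hypothetical:

```sql
-- Inspect the current delayed durability setting (SQL Server 2014+).
SELECT name, delayed_durability_desc
FROM sys.databases
WHERE name = 'LoanDB';

-- Require every commit to harden its log records to stable storage
-- before the commit is acknowledged to the client.
ALTER DATABASE LoanDB SET DELAYED_DURABILITY = DISABLED;
```

The trade-off is throughput: fully durable commits wait on log I/O, so disabling delayed durability typically lowers peak transaction rates in exchange for never losing an acknowledged commit.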
Question 2
A stock brokerage firm operates an Oracle database that records equity trade executions for institutional clients. The database server runs on AIX with SAN-attached storage over Fibre Channel connectivity. The DBA has configured COMMIT_LOGGING=IMMEDIATE, and the storage team reports that the SAN cache is protected by supercapacitors rated for 60 seconds of flush time during power loss. The application tier uses connection pooling with 200 persistent connections, and during peak trading hours (09:30-10:00 AM) the system processes 18,000 trade executions.

On November 8th, a Fibre Channel switch experienced a firmware deadlock at 09:47:22, causing all I/O operations to hang for 12 seconds before the switch self-recovered at 09:47:34. The database processes remained active throughout, and applications continued submitting trades and receiving commit confirmations. Post-incident analysis revealed that 94 trade executions committed between 09:47:19 and 09:47:33 (which had been confirmed to traders and reported to market regulators via real-time feeds) are missing from the recovered database, while 289 other trades from the same 14-second window persisted successfully. The Oracle alert log shows no errors during the incident window, and the redo logs contain sequential entries with no gaps.

Investigation shows that Oracle's log writer process received fsync acknowledgments from the operating system for all 94 missing transactions before the switch deadlock occurred. Further analysis reveals that AIX's journaled file system (JFS2) was mounted with the 'concurrent I/O' option, which bypasses the operating system buffer cache and allows the storage subsystem to acknowledge writes as soon as they reach the SAN cache, before actual disk writes complete. The SAN cache policy was configured for 'write-back with forced flush every 15 seconds'.

Which configuration modification would provide the MOST comprehensive durability assurance for all acknowledged trade commits during transient storage interconnect failures?
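For reference, Oracle exposes its commit durability controls through the COMMIT_WAIT and COMMIT_LOGGING initialization parameters and the per-statement COMMIT WRITE clause. A hedged sketch for verifying the database-side settings follows; note that in this scenario the loss occurred below the database (the SAN acknowledged cached writes), so these parameters alone cannot close the gap and the storage cache policy must also be addressed:

```sql
-- Show the current commit durability parameters (SQL*Plus syntax).
SHOW PARAMETER commit_wait
SHOW PARAMETER commit_logging

-- Make the log writer wait for redo to be acknowledged as persistent
-- before returning commit success (these are the durable defaults).
ALTER SYSTEM SET commit_wait = 'WAIT' SCOPE = BOTH;
ALTER SYSTEM SET commit_logging = 'IMMEDIATE' SCOPE = BOTH;

-- Per-transaction equivalent at commit time:
COMMIT WRITE WAIT IMMEDIATE;
```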
Question 3
Which DCL command is specifically designed to assign permissions on database objects to users or roles?
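For review: GRANT is the DCL statement that assigns privileges on database objects to users or roles, with REVOKE as its counterpart for removing them. A brief illustrative sketch; the object and role names are hypothetical:

```sql
-- GRANT assigns object privileges to a user or role.
GRANT SELECT, INSERT ON orders TO reporting_role;

-- REVOKE removes previously granted privileges.
REVOKE INSERT ON orders FROM reporting_role;
```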