Snowflake SnowPro Advanced: Administrator Certification ADA-C01 Exam Practice Test

Total 78 questions
Question 1

What Snowflake capabilities are commonly used in rollback scenarios? (Select TWO).

A. SELECT SYSTEM$CANCEL_QUERY(...)

B. CLONE ... BEFORE (STATEMENT => 'query_id')

C. CREATE TABLE ... AS SELECT * FROM RESULT_SCAN(...)

D. ALTER TABLE ... SWAP WITH ...

E. Contact Snowflake Support to retrieve Fail-safe data

Answer : B, D

Scenario: You want to roll back changes caused by a problematic query (e.g., accidental data modification or corruption). Snowflake provides two powerful tools:

B. CLONE ... BEFORE (STATEMENT => 'query_id')

This uses Time Travel + Zero-Copy Cloning.

You can clone a table as it existed before a specific query.

It creates a full copy of the table's state at that moment without duplicating storage.

Example:

CREATE TABLE prd_table_bkp CLONE prd_table

BEFORE (STATEMENT => '01a2b3c4-0000-0000-0000-123456789abc');

D. ALTER TABLE ... SWAP WITH ...

Once you've cloned the backup, you can swap it with the live table.

This is a fast, atomic operation, which makes it ideal for rollback.

Example:

ALTER TABLE prd_table SWAP WITH prd_table_bkp;
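
Putting the two capabilities together, the full rollback flow might look like the following sketch (the query ID is a placeholder):

-- Recreate the table as it existed just before the problematic statement
CREATE TABLE prd_table_bkp CLONE prd_table
  BEFORE (STATEMENT => '01a2b3c4-0000-0000-0000-123456789abc');

-- Atomically exchange the restored copy with the live table
ALTER TABLE prd_table SWAP WITH prd_table_bkp;

-- prd_table_bkp now holds the damaged version; keep it for inspection or drop it
DROP TABLE prd_table_bkp;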

Why the Other Options Are Incorrect:

A. SELECT SYSTEM$CANCEL_QUERY(...)

Cancels a currently running query; it does not help once the query has already executed and caused damage.

C. CREATE TABLE ... AS SELECT * FROM RESULT_SCAN(...)

Reconstructs results, not the original table.

Only captures output rows, not full table state.

Not ideal for rollback.

E. Contact Snowflake Support to retrieve Fail-safe data

Fail-safe is for disaster recovery only and is accessible only by Snowflake Support.

It's not intended for routine rollback or recovery and has a 7-day fixed retention (non-configurable).

SnowPro Administrator Reference:

Zero-Copy Cloning with Time Travel

ALTER TABLE SWAP

System Functions -- SYSTEM$CANCEL_QUERY

Fail-safe Overview


Question 2

An Administrator loads data into a staging table every day. Once loaded, users from several different departments perform transformations on the data and load it into different production tables.

How should the staging table be created and used to MINIMIZE storage costs and MAXIMIZE performance?



Answer : B

According to the Snowflake documentation, a transient table has no Fail-safe period and supports at most 1 day of Time Travel (which can be reduced to 0), so it does not accumulate storage costs for historical versions of the data or for disaster-recovery backups. With a retention time of 0 days, no Time Travel history is kept at all, and data removed by a DROP or TRUNCATE is immediately unrecoverable. Creating the staging table as a transient table with a retention time of 0 days therefore minimizes storage costs and maximizes performance: the data is loaded and transformed once, nothing extra is retained after the production tables are populated, and the table itself persists across sessions so users from every department can work with it.

Option A is incorrect because an external table references data files in a cloud storage location; it adds cost and complexity for data transfer and synchronization and generally performs worse for loading and transformation than a native table. Option C is incorrect because a temporary table is automatically dropped when the session ends and is visible only to the session that created it, so it can cause data loss or inconsistency if the session is interrupted before the production tables are populated, and other departments could not query it. Option D is incorrect because a permanent table supports Time Travel and Fail-safe, which adds storage costs for historical versions and backups that a short-lived staging table does not need.
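
As a minimal sketch of the recommended approach (the table name and column are illustrative, not taken from the exam question):

CREATE TRANSIENT TABLE staging_daily_load (
  raw_record VARIANT
)
DATA_RETENTION_TIME_IN_DAYS = 0;  -- no Time Travel history is kept for this staging data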


Question 3

The following SQL command was executed:

Use role SECURITYADMIN;

Grant ownership

On future tables

In schema PROD.WORKING

To role PROD_WORKING_OWNER;

Grant role PROD_WORKING_OWNER to role SYSADMIN;

Use role ACCOUNTADMIN;

Create table PROD.WORKING.XYZ (value number);

Which role(s) can alter or drop table XYZ?



Answer : C

According to the GRANT OWNERSHIP documentation, the ownership privilege grants full control over a table and can be held by only one role at a time, although the current owner can transfer it by granting ownership to another role. In this case, the SECURITYADMIN role granted ownership on future tables in the PROD.WORKING schema to the PROD_WORKING_OWNER role, so any table created in that schema after the grant, including XYZ, is owned by PROD_WORKING_OWNER. The PROD_WORKING_OWNER role can therefore alter or drop table XYZ.

The SYSADMIN role can also alter or drop the table, because it was granted the PROD_WORKING_OWNER role and inherits its privileges, including ownership of XYZ. The ACCOUNTADMIN role sits at the top of the role hierarchy and inherits SYSADMIN (and, through it, PROD_WORKING_OWNER), so it can alter or drop the table as well. The SECURITYADMIN role cannot alter or drop table XYZ, because it neither owns the table nor has been granted the PROD_WORKING_OWNER role.
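
For example, after the statements above, SYSADMIN could drop the table through its inherited ownership (a sketch, not part of the original question):

USE ROLE SYSADMIN;            -- SYSADMIN was granted PROD_WORKING_OWNER, so it inherits its privileges
DROP TABLE PROD.WORKING.XYZ;  -- succeeds via the inherited OWNERSHIP privilege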


Question 4

An Administrator has a user who needs to be able to suspend and resume a task based on the current virtual warehouse load, but this user should not be able to modify the task or start a new run.

What privileges should be granted to the user to meet these requirements? (Select TWO).



Question 5

A Snowflake Administrator needs to set up Time Travel for a presentation area that includes fact and dimension tables and receives a lot of meaningless and erroneous IoT data. Time Travel is being used as a component of the company's data quality process, in which the ingestion pipeline should revert to a known quality data state if any anomalies are detected in the latest load. Data from the past 30 days may have to be retrieved because of latencies in the data acquisition process.

According to best practices, how should these requirements be met? (Select TWO).



Answer : B, E

According to the Understanding & Using Time Travel documentation, Time Travel is a feature that allows you to query, clone, and restore historical data in tables, schemas, and databases for up to 90 days. To meet the requirements of the scenario, the following best practices should be followed:

* The fact and dimension tables should have the same DATA_RETENTION_TIME_IN_DAYS. This parameter specifies the number of days for which the historical data is preserved and can be accessed by Time Travel. To ensure that the fact and dimension tables can be reverted to a consistent state in case of any anomalies in the latest load, they should have the same retention period. Otherwise, some tables may lose their historical data before others, resulting in data inconsistency and quality issues.

* The fact and dimension tables should be cloned together using the same Time Travel options to reduce potential referential integrity issues with the restored data. Cloning is a way of creating a copy of an object (table, schema, or database) at a specific point in time using Time Travel. To ensure that the fact and dimension tables are cloned with the same data set, they should be cloned together using the same AT or BEFORE clause. This will avoid any referential integrity issues that may arise from cloning tables at different points in time.
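
A minimal sketch of these two practices, assuming a schema named PRESENTATION and illustrative table names and timestamp:

-- Same Time Travel retention on both tables (30 days to cover acquisition latency)
ALTER TABLE PRESENTATION.FACT_SALES   SET DATA_RETENTION_TIME_IN_DAYS = 30;
ALTER TABLE PRESENTATION.DIM_CUSTOMER SET DATA_RETENTION_TIME_IN_DAYS = 30;

-- Clone both tables from the same point in time so the restored set stays consistent
CREATE TABLE PRESENTATION.FACT_SALES_RESTORED
  CLONE PRESENTATION.FACT_SALES AT (TIMESTAMP => '2024-06-01 00:00:00'::TIMESTAMP_LTZ);
CREATE TABLE PRESENTATION.DIM_CUSTOMER_RESTORED
  CLONE PRESENTATION.DIM_CUSTOMER AT (TIMESTAMP => '2024-06-01 00:00:00'::TIMESTAMP_LTZ);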

The other options are incorrect because:

* Related data should not be placed together in the same schema. Facts and dimension tables should each have their own schemas. This is not a best practice for Time Travel, as it does not affect the ability to query, clone, or restore historical data. However, it may be a good practice for data modeling and organization, depending on the use case and design principles.

* The DATA_RETENTION_TIME_IN_DAYS should be kept at the account level and never used for lower level containers (databases and schemas). This is not a best practice for Time Travel, as it limits the flexibility and granularity of setting the retention period for different objects. The retention period can be set at the account, database, schema, or table level, and the most specific setting overrides the more general ones. This allows for customizing the retention period based on the data needs and characteristics of each object.

* Only TRANSIENT tables should be used to ensure referential integrity between the fact and dimension tables. This is not a best practice for Time Travel, as it does not affect the referential integrity between the tables. Transient tables are tables that do not have a Fail-safe period, which means that they cannot be recovered by Snowflake after the retention period ends. However, they still support Time Travel within the retention period, and can be queried, cloned, and restored like permanent tables. The choice of table type depends on the data durability and availability requirements, not on the referential integrity.


Question 6

A Snowflake Administrator needs to persist all virtual warehouse configurations for auditing and backups. Given a table already exists with the following schema:

Table Name : VWH_META

Column 1 : SNAPSHOT_TIME TIMESTAMP_NTZ

Column 2 : CONFIG VARIANT

Which commands should be executed to persist the warehouse data at the time of execution in JSON format in the table VWH_META?



Answer : C

According to the Using Persisted Query Results documentation, the RESULT_SCAN function allows you to query the result set of a previous command as if it were a table. The LAST_QUERY_ID function returns the query ID of the most recent statement executed in the current session. Therefore, the combination of these two functions can be used to access the output of the SHOW WAREHOUSES command, which returns the configurations of all the virtual warehouses in the account. However, to persist the warehouse data in JSON format in the table VWH_META, the OBJECT_CONSTRUCT function is needed to convert the output of the SHOW WAREHOUSES command into a VARIANT column. The OBJECT_CONSTRUCT function takes a list of key-value pairs and returns a single JSON object. Therefore, the correct commands to execute are:

1. SHOW WAREHOUSES;

2. INSERT INTO VWH_META SELECT CURRENT_TIMESTAMP(), OBJECT_CONSTRUCT(*) FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()));

The other options are incorrect because:

* A. This option does not use the OBJECT_CONSTRUCT function, so it will not persist the warehouse data in JSON format. Also, it is missing the * symbol in the SELECT clause, so it will not select any columns from the result set of the SHOW WAREHOUSES command.

* B. This option does not use the OBJECT_CONSTRUCT function, so it will not persist the warehouse data in JSON format. It will also try to insert multiple columns into a single VARIANT column, which will cause a type mismatch error.

* D. This option does not use the OBJECT_CONSTRUCT function, so it will not persist the warehouse data in JSON format. It also tries to apply RESULT_SCAN to a subquery, which is not supported; RESULT_SCAN accepts only a query ID, such as the value returned by LAST_QUERY_ID().
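
Once the snapshot rows are inserted, the persisted configurations can be queried back with Snowflake's semi-structured syntax; a sketch of how the VARIANT column could be unpacked (the keys mirror column names of the SHOW WAREHOUSES output):

SELECT snapshot_time,
       config:"name"::STRING  AS warehouse_name,
       config:"size"::STRING  AS warehouse_size,
       config:"state"::STRING AS state
FROM VWH_META
ORDER BY snapshot_time DESC;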


Question 7

Which function is the role SECURITYADMIN responsible for that is not granted to role USERADMIN?



Answer : B

According to the Snowflake documentation, the SECURITYADMIN role is responsible for managing all grants on objects in the account (it holds the MANAGE GRANTS privilege), including system grants. The USERADMIN role can only create and manage users and roles; it cannot grant privileges on other objects. Therefore, the function unique to the SECURITYADMIN role is managing system grants. Option A is incorrect because both roles can reset a user's password. Option C is incorrect because both roles can create new users. Option D is incorrect because both roles can create new roles.
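
As an illustration of the difference (the database and role names are hypothetical), SECURITYADMIN can grant privileges on objects it does not own because it holds MANAGE GRANTS, while USERADMIN cannot:

USE ROLE SECURITYADMIN;
GRANT USAGE ON DATABASE PROD TO ROLE ANALYST;   -- succeeds: SECURITYADMIN has MANAGE GRANTS

USE ROLE USERADMIN;
GRANT USAGE ON DATABASE PROD TO ROLE ANALYST;   -- fails: USERADMIN lacks the privilege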

