Snowflake SnowPro Advanced: Administrator Certification ADA-C01 Exam Practice Test

Total 78 questions
Question 1

An Administrator has a user who needs to be able to suspend and resume a task based on the current virtual warehouse load, but this user should not be able to modify the task or start a new run.

What privileges should be granted to the user to meet these requirements? (Select TWO).
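The answer choices are not reproduced here, but the capability described maps to Snowflake's OPERATE privilege on the task, together with USAGE on the database and schema that contain it. A minimal sketch with hypothetical object and role names:

GRANT USAGE ON DATABASE etl_db TO ROLE task_operator;
GRANT USAGE ON SCHEMA etl_db.jobs TO ROLE task_operator;
GRANT OPERATE ON TASK etl_db.jobs.load_task TO ROLE task_operator;

-- OPERATE allows ALTER TASK ... SUSPEND / RESUME, but not modifying the task
-- (which requires OWNERSHIP) or starting a run (which requires the global
-- EXECUTE TASK privilege)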



Question 2

What are characteristics of Dynamic Data Masking? (Select TWO).



Answer : B, E

According to the Using Dynamic Data Masking documentation, Dynamic Data Masking is a feature that allows you to alter sections of data in table and view columns at query time using a predefined masking strategy. The following are some of the characteristics of Dynamic Data Masking:

* A single masking policy can be applied to columns in different tables. This means that you can write a policy once and have it apply to thousands of columns across databases and schemas.

* A single masking policy can be applied to columns with different data types. This means that you can use the same masking strategy for columns that store different kinds of data, such as strings, numbers, dates, etc.

* A masking policy that is currently set on a table can be dropped. This means that you can remove the masking policy from the table and restore the original data visibility.

* A masking policy can be applied to the VALUE column of an external table. This means that you can mask data that is stored in an external stage and queried through an external table.

* The role that creates the masking policy will always see unmasked data in query results. This is not true: whether the creator sees unmasked data depends on the execution context conditions defined in the policy body. For example, if the policy only reveals unmasked data to users with a certain custom entitlement, the creator role must also have that entitlement to see the unmasked data.
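As an illustration of the write-once, apply-to-many behavior, here is a sketch with hypothetical table, column, and role names:

CREATE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
  CASE WHEN CURRENT_ROLE() IN ('ANALYST') THEN val ELSE '*********' END;

-- The same policy protects email columns in two different tables
ALTER TABLE customers MODIFY COLUMN email SET MASKING POLICY email_mask;
ALTER TABLE employees MODIFY COLUMN email SET MASKING POLICY email_mask;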


Question 3

When adding secure views to a share in Snowflake, which function is needed to authorize users from another account to access rows in a base table?



Answer : C

According to the Working with Secure Views documentation, secure views are designed to limit access to sensitive data that should not be exposed to all users of the underlying table(s). When a secure view is shared with another account, the view definition must filter the rows of the base table using the CURRENT_ACCOUNT function, which returns the identifier of the account querying the view. A common pattern is to join the base table to a mapping table that records which consumer account is entitled to which rows, and compare the mapping column with CURRENT_ACCOUNT so that each consumer sees only its own rows. CURRENT_USER and CURRENT_ROLE are not suitable here, because they return NULL when a secure view is accessed through a share from a consumer account. The CURRENT_CLIENT function is also unsuitable, because it returns the name and version of the client application connected to Snowflake, which is unrelated to the consumer's identity.
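A sketch of this pattern, with hypothetical table and column names:

-- Mapping table maintained by the provider: which account may see which rows
CREATE TABLE sharing_access (customer_id INT, snowflake_account STRING);

CREATE SECURE VIEW paid_sales AS
  SELECT s.*
  FROM sales s
  JOIN sharing_access a ON s.customer_id = a.customer_id
  WHERE a.snowflake_account = CURRENT_ACCOUNT();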


Question 4

A Snowflake account is configured with SCIM provisioning for user accounts and has bi-directional synchronization for user identities. An Administrator with access to SECURITYADMIN uses the Snowflake UI to create a user by issuing the following commands:

use role USERADMIN;

create or replace role DEVELOPER_ROLE;

create user PTORRES PASSWORD = 'hello world!' MUST_CHANGE_PASSWORD = FALSE default_role = DEVELOPER_ROLE;

The new user named PTORRES successfully logs in, but sees a default role of PUBLIC in the web UI. When attempted, the following command fails:

use role DEVELOPER_ROLE;

Why does this command fail?



Answer : C

According to the Snowflake documentation, creating a user with a default role does not automatically grant that role to the user; the role must be explicitly granted by its owner or by a higher-level role. Therefore, the USERADMIN role, which created DEVELOPER_ROLE and therefore owns it, needs to explicitly grant DEVELOPER_ROLE to the new user PTORRES using the GRANT ROLE command. Until then, PTORRES cannot use DEVELOPER_ROLE and will see the default role of PUBLIC in the web UI. Option A is incorrect because DEVELOPER_ROLE does not need to be granted to SYSADMIN before PTORRES can use it. Option B is incorrect because a new role takes effect as soon as it is created and granted to a user; it does not depend on the USERADMIN role logging out. Option D is incorrect because the new role is created and managed in Snowflake, so it is not affected by identity provider synchronization.
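A minimal sketch of the missing step, using the object names from the question:

use role USERADMIN;
grant role DEVELOPER_ROLE to user PTORRES;

After this grant, PTORRES can successfully run use role DEVELOPER_ROLE; and new sessions will start with DEVELOPER_ROLE as the default role.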


Question 5

What Snowflake capabilities are commonly used in rollback scenarios? (Select TWO).



Answer : B, D

Scenario: You want to roll back changes caused by a problematic query (e.g., accidental data modification or corruption). Snowflake provides two powerful tools:

B. CLONE ... BEFORE (STATEMENT => 'query_id')

This uses Time Travel + Zero-Copy Cloning.

You can clone a table as it existed before a specific query.

It creates a full copy of the table's state at that moment without duplicating storage.

Example:

CREATE TABLE prd_table_bkp CLONE prd_table
  BEFORE (STATEMENT => '01a2b3c4-0000-0000-0000-123456789abc');

D. ALTER TABLE ... SWAP WITH ...

Once you've cloned the backup, you can swap it with the live table.

This is a fast, atomic operation, ideal for rollback.

Example:

ALTER TABLE prd_table SWAP WITH prd_table_bkp;

Why the Other Options Are Incorrect:

A. SELECT SYSTEM$CANCEL_QUERY(...)

Cancels a currently running query; it does not help if the query has already executed and caused damage.

C. CREATE TABLE ... AS SELECT * FROM RESULT_SCAN(...)

Reconstructs results, not the original table.

Only captures output rows, not full table state.

Not ideal for rollback.

E. Contact Snowflake Support to retrieve Fail-safe data

Fail-safe is for disaster recovery only, and only accessible by Snowflake support.

It's not intended for routine rollback or recovery and has a 7-day fixed retention (non-configurable).

SnowPro Administrator Reference:

Zero-Copy Cloning with Time Travel

ALTER TABLE SWAP

System Functions -- SYSTEM$CANCEL_QUERY

Fail-safe Overview


Question 6

A Snowflake Administrator needs to set up Time Travel for a presentation area that includes facts and dimensions tables, and receives a lot of meaningless and erroneous IoT data. Time Travel is being used as a component of the company's data quality process, in which the ingestion pipeline should revert to a known quality data state if any anomalies are detected in the latest load. Data from the past 30 days may have to be retrieved because of latencies in the data acquisition process.

According to best practices, how should these requirements be met? (Select TWO).



Answer : B, E

According to the Understanding & Using Time Travel documentation, Time Travel is a feature that allows you to query, clone, and restore historical data in tables, schemas, and databases for up to 90 days. To meet the requirements of the scenario, the following best practices should be followed:

* The fact and dimension tables should have the same DATA_RETENTION_TIME_IN_DAYS. This parameter specifies the number of days for which the historical data is preserved and can be accessed by Time Travel. To ensure that the fact and dimension tables can be reverted to a consistent state in case of any anomalies in the latest load, they should have the same retention period. Otherwise, some tables may lose their historical data before others, resulting in data inconsistency and quality issues.

* The fact and dimension tables should be cloned together using the same Time Travel options to reduce potential referential integrity issues with the restored data. Cloning is a way of creating a copy of an object (table, schema, or database) at a specific point in time using Time Travel. To ensure that the fact and dimension tables are cloned with the same data set, they should be cloned together using the same AT or BEFORE clause. This will avoid any referential integrity issues that may arise from cloning tables at different points in time.

The other options are incorrect because:

* Related data should not be placed together in the same schema. Facts and dimension tables should each have their own schemas. This is not a best practice for Time Travel, as it does not affect the ability to query, clone, or restore historical data. However, it may be a good practice for data modeling and organization, depending on the use case and design principles.

* The DATA_RETENTION_TIME_IN_DAYS should be kept at the account level and never used for lower level containers (databases and schemas). This is not a best practice for Time Travel, as it limits the flexibility and granularity of setting the retention period for different objects. The retention period can be set at the account, database, schema, or table level, and the most specific setting overrides the more general ones. This allows for customizing the retention period based on the data needs and characteristics of each object.

* Only TRANSIENT tables should be used to ensure referential integrity between the fact and dimension tables. This is not a best practice for Time Travel, as it does not affect the referential integrity between the tables. Transient tables are tables that do not have a Fail-safe period, which means that they cannot be recovered by Snowflake after the retention period ends. However, they still support Time Travel within the retention period, and can be queried, cloned, and restored like permanent tables. The choice of table type depends on the data durability and availability requirements, not on the referential integrity.
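A sketch of these two practices with hypothetical object names and an illustrative timestamp: setting the retention once at the schema level gives the fact and dimension tables the same 30-day window, and cloning the whole schema with a single AT clause restores both table types to the same point in time:

ALTER SCHEMA presentation SET DATA_RETENTION_TIME_IN_DAYS = 30;

-- Clone facts and dimensions together at one point in time to avoid
-- referential integrity issues between the restored tables
CREATE SCHEMA presentation_restore CLONE presentation
  AT (TIMESTAMP => '2023-06-01 00:00:00'::timestamp);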


Question 7

MY_TABLE is a table that has not been updated or modified for several days. On 01 January 2021 at 07:01, a user executed a query to update this table. The query ID is '8e5d0ca9-005e-44e6-b858-a8f5b37c5726'. It is now 07:30 on the same day.

Which queries will allow the user to view the historical data that was in the table before this query was executed? (Select THREE).



Answer : B, E, F

According to the AT | BEFORE documentation, the AT or BEFORE clause is used for Snowflake Time Travel, which allows you to query historical data from a table based on a specific point in the past. The clause can use one of the following parameters to pinpoint the exact historical data you wish to access:

* TIMESTAMP: Specifies an exact date and time to use for Time Travel.

* OFFSET: Specifies the difference in seconds from the current time to use for Time Travel.

* STATEMENT: Specifies the query ID of a statement to use as the reference point for Time Travel.

Therefore, the queries that will allow the user to view the historical data that was in the table before the query was executed are:

* B. SELECT * FROM my_table AT (TIMESTAMP => '2021-01-01 07:00:00' :: timestamp); This query uses the TIMESTAMP parameter to specify a point in time (07:00) that precedes the update executed at 07:01.

* E. SELECT * FROM my_table AT (OFFSET => -60*30); This query uses the OFFSET parameter to go back 1,800 seconds (30 minutes) from the current time of 07:30, which resolves to 07:00, before the update executed at 07:01.

* F. SELECT * FROM my_table BEFORE (STATEMENT => '8e5d0ca9-005e-44e6-b858-a8f5b37c5726'); This query uses the BEFORE keyword and the STATEMENT parameter to specify the point in time immediately preceding the update statement, excluding its changes.

The other queries are incorrect because:

* A. SELECT * FROM my_table WITH TIME_TRAVEL (OFFSET => -60*30); WITH TIME_TRAVEL is not valid Snowflake syntax. Time Travel is expressed with an AT or BEFORE clause after the table name in the FROM clause.

* C. SELECT * FROM TIME_TRAVEL ('MY_TABLE', 2021-01-01 07:00:00); This is not valid syntax either; there is no TIME_TRAVEL function in Snowflake.

* D. SELECT * FROM my_table PRIOR TO STATEMENT '8e5d0ca9-005e-44e6-b858-a8f5b37c5726'; PRIOR TO is not a valid Time Travel keyword. The only supported clauses are AT and BEFORE; the valid equivalent of this query is the BEFORE (STATEMENT => ...) form shown above.
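For reference, the three valid forms consolidated (run at 07:30 in the scenario):

-- All of these return MY_TABLE as it was before the 07:01 update
SELECT * FROM my_table AT (TIMESTAMP => '2021-01-01 07:00:00'::timestamp);
SELECT * FROM my_table AT (OFFSET => -60*30);  -- 1,800 seconds back => 07:00
SELECT * FROM my_table BEFORE (STATEMENT => '8e5d0ca9-005e-44e6-b858-a8f5b37c5726');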

