What do you need in order to execute an INSERT command in a Snowflake worksheet?
Answer : D
Executing an INSERT command in a Snowflake worksheet requires three components to be explicitly selected:
Active Warehouse-- Provides compute resources to process the query. Without an active warehouse, DML operations cannot execute.
Database-- The table receiving the INSERT must reside in the active database context.
Schema-- The targeted table must be within a selected schema.
All three define the fully qualified context for locating and writing to the correct object. Snowflake's compute/storage separation ensures compute is only used when warehouses are active. Schema and database selection ensure correct namespace resolution.
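The three context components above can be set explicitly before running DML. A minimal sketch, in which the warehouse, database, schema, and table names are placeholders:

```sql
-- Set the three required context components
USE WAREHOUSE my_wh;     -- active warehouse: supplies compute for DML
USE DATABASE my_db;      -- active database
USE SCHEMA my_schema;    -- active schema

-- With the context established, the INSERT can resolve its target and execute
INSERT INTO customers (id, name) VALUES (1, 'Acme Corp');
```

Alternatively, the same context can be chosen from the worksheet's context selectors in Snowsight rather than with USE statements.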
What are Snowflake customers responsible for?
Answer : D
As a fully managed cloud data platform, Snowflake is responsible for infrastructure provisioning, hardware, software installation, platform upgrades, scaling, and internal metadata management such as micro-partitions and statistics. Customers do not manage physical hardware or install Snowflake software.
Customers are responsible for their data and its lifecycle within Snowflake. This includes loading data into tables from internal and external sources, unloading data when required, organizing data structures (databases, schemas, tables), defining access controls, and managing how data is used, transformed, and governed. They design schemas and workloads but do not manage the underlying engine. Therefore, ''Loading, unloading, and managing data'' correctly describes the customer's responsibility.
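As an illustration of typical customer-side loading and unloading tasks, a hedged sketch in which the stage, table, and column names are hypothetical:

```sql
-- Loading: copy staged CSV files into a table
COPY INTO sales
  FROM @my_stage/sales/
  FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);

-- Unloading: export query results back out to a stage
COPY INTO @my_stage/export/
  FROM (SELECT * FROM sales WHERE region = 'EMEA');
```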
==================
What is a fully qualified name in Snowflake used for?
Answer : D
A fully qualified name uniquely identifies Snowflake objects by specifying database.schema.object. This prevents ambiguity when multiple schemas or databases contain objects with identical names. Fully qualified names ensure that SQL statements operate on the intended object. They are not used for storing data, managing permissions, or configuring network settings. Their core purpose is precise object identification.
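For example (the database, schema, and table names below are placeholders):

```sql
-- Fully qualified reference: database.schema.object
SELECT * FROM sales_db.finance.invoices;

-- Unqualified reference: resolution depends on the session's current
-- database and schema, which can be ambiguous across contexts
SELECT * FROM invoices;
```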
What file extension is commonly used for Snowflake notebooks?
Answer : C
Snowflake notebooks use the .ipynb file extension, the standard format for Jupyter notebooks. This format stores executable code, markdown, metadata, and cell outputs in a structured JSON layout. Snowflake adopts this format to ensure compatibility with the broader Python ecosystem, thereby enabling seamless migration between Snowflake and external notebook environments.
The .ipynb structure allows mixed SQL and Python cells, visualizations, Streamlit components, documentation, and stepwise development within Snowsight. It supports reproducibility, collaboration, and integration with Snowpark and Cortex.
Incorrect formats:
.ipnb is a misspelling and invalid.
.sql is used for SQL scripts only.
.txt cannot represent notebook metadata or cell structure.
Thus, .ipynb is the correct and only supported notebook format.
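To make the JSON layout concrete, a stripped-down sketch of an .ipynb file (real notebooks carry additional metadata fields; the cell contents here are illustrative only):

```json
{
  "nbformat": 4,
  "nbformat_minor": 5,
  "metadata": {},
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": ["## Load data"]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": ["SELECT CURRENT_VERSION();"]
    }
  ]
}
```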
Which role is a system defined role in Snowflake?
Answer : A
USERADMIN is one of Snowflake's system-defined roles, created automatically in every account. It is responsible for managing users and roles, including CREATE USER, ALTER USER, and role assignment. It is part of Snowflake's default RBAC hierarchy (SYSADMIN, SECURITYADMIN, USERADMIN, etc.).
SNOWFLAKE_ADMIN and SNOWFLAKE_DBA are not Snowflake system roles---they may exist in organizations as custom roles but do not appear by default. DATA_ENGINEER is also user-created and not a built-in role.
Therefore, USERADMIN is the only true system-defined role listed.
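Typical USERADMIN tasks can be sketched as follows (the user name, role name, and password are placeholders):

```sql
USE ROLE USERADMIN;

-- Create a user and a custom role, then grant the role
CREATE USER analyst_1 PASSWORD = '<placeholder>' DEFAULT_ROLE = analyst;
CREATE ROLE analyst;
GRANT ROLE analyst TO USER analyst_1;
```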
==================
What Snowflake parameter is configured in the Query Processing layer?
Answer : C
The Query Processing layer of Snowflake is where virtual warehouses operate, so warehouse sizing parameters (X-Small to 6X-Large) fall under this layer. Warehouse size determines compute power, concurrency, and performance behavior for SQL workloads. Administrators configure warehouse size based on workload intensity, response time requirements, and cost considerations.
Serverless compute limits and micro-partition limits belong to storage and services layers. Table types (permanent, transient, temporary) are storage-level configurations, not part of Query Processing.
Thus, warehouse sizing is the correct parameter configured at the Query Processing layer.
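Warehouse size is set at creation time and can be changed later. A short sketch (the warehouse name is a placeholder):

```sql
-- Create a warehouse with an initial size
CREATE WAREHOUSE reporting_wh WAREHOUSE_SIZE = 'XSMALL';

-- Resize it to handle a heavier workload
ALTER WAREHOUSE reporting_wh SET WAREHOUSE_SIZE = 'LARGE';
```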
==================
How do you drop a schema named "temp_schema" in Snowflake?
Answer : A
The correct SQL command for removing a schema in Snowflake is:
DROP SCHEMA temp_schema;
This command deletes the schema and all objects contained within it, including tables, views, stages, file formats, and sequences. Snowflake performs this operation atomically, ensuring metadata consistency during the drop process. Users can also include IF EXISTS or the CASCADE keyword to handle dependencies more explicitly:
DROP SCHEMA IF EXISTS temp_schema CASCADE;
This safely handles scenarios where the schema may not exist or contains objects that would normally block deletion.
Incorrect options:
DROP DATABASE temp_schema removes an entire database, not a schema.
DELETE SCHEMA is not valid SQL; SQL uses DROP for schema removal.
DROP VIEW temp_schema applies only to removing a view object.
Dropping a schema requires the OWNERSHIP privilege on the schema (along with USAGE on its parent database), typically held by roles such as SYSADMIN or ACCOUNTADMIN.