MSSQL Interview Questions

Triggers

Here's an example of a SQL Server trigger.

Let's assume we have two tables: Orders and OrderHistory. Whenever a new order is inserted into the Orders table, we want to automatically insert a corresponding record into the OrderHistory table.

Here's how you can achieve this using a trigger:

-- Creating the Orders table
CREATE TABLE Orders (
    OrderID INT PRIMARY KEY,
    OrderDate DATE,
    CustomerName VARCHAR(100),
    TotalAmount DECIMAL(10, 2)
);

-- Creating the OrderHistory table
CREATE TABLE OrderHistory (
    OrderID INT,
    OrderDate DATE,
    CustomerName VARCHAR(100),
    TotalAmount DECIMAL(10, 2),
    CreatedAt DATETIME
);

-- Creating the trigger
CREATE TRIGGER trg_OrderHistory
ON Orders
AFTER INSERT
AS
BEGIN
    INSERT INTO OrderHistory (OrderID, OrderDate, CustomerName, TotalAmount, CreatedAt)
    SELECT OrderID, OrderDate, CustomerName, TotalAmount, GETDATE()
    FROM inserted;
END;

In this example, we create two tables: Orders and OrderHistory. The Orders table represents the current orders, and the OrderHistory table is used to store historical data.
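
To see the trigger fire, insert a row into Orders and then check OrderHistory (the sample values below are arbitrary):

-- Insert a new order; the trigger copies it into OrderHistory automatically
INSERT INTO Orders (OrderID, OrderDate, CustomerName, TotalAmount)
VALUES (1, '2023-06-01', 'John Doe', 250.00);

-- Verify that the trigger wrote the history row
SELECT * FROM OrderHistory WHERE OrderID = 1;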

Cursor

A cursor processes a result set one row at a time. The example below loops over the Employees table and copies each row into a temporary table:

-- Creating a temporary table to hold the results
CREATE TABLE #EmployeeData (
    EmployeeID INT,
    EmployeeName VARCHAR(100),
    Salary DECIMAL(10, 2)
);

-- Declare the cursor
DECLARE @EmployeeID INT;
DECLARE @EmployeeName VARCHAR(100);
DECLARE @Salary DECIMAL(10, 2);

DECLARE EmployeeCursor CURSOR FOR
SELECT EmployeeID, EmployeeName, Salary
FROM Employees;

-- Open the cursor
OPEN EmployeeCursor;

-- Fetch the first row
FETCH NEXT FROM EmployeeCursor INTO @EmployeeID, @EmployeeName, @Salary;

-- Process each row using a loop
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Perform some operations on the current row
    INSERT INTO #EmployeeData (EmployeeID, EmployeeName, Salary)
    VALUES (@EmployeeID, @EmployeeName, @Salary);

    -- Fetch the next row
    FETCH NEXT FROM EmployeeCursor INTO @EmployeeID, @EmployeeName, @Salary;
END;

-- Close and deallocate the cursor
CLOSE EmployeeCursor;
DEALLOCATE EmployeeCursor;

-- Select the results from the temporary table
SELECT * FROM #EmployeeData;

-- Clean up
DROP TABLE #EmployeeData;
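
Worth noting in an interview: the cursor above only copies rows, so the same result can be achieved with a single set-based statement, which SQL Server generally executes far more efficiently. Cursors are best reserved for logic that genuinely must run row by row:

-- Set-based equivalent of the entire cursor loop above
INSERT INTO #EmployeeData (EmployeeID, EmployeeName, Salary)
SELECT EmployeeID, EmployeeName, Salary
FROM Employees;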

Views

A view is a named, stored query that can be used like a table. The example below creates a view over the Employees table:

-- Creating the Employees table
CREATE TABLE Employees (
    EmployeeID INT PRIMARY KEY,
    FirstName VARCHAR(50),
    LastName VARCHAR(50),
    Salary DECIMAL(10, 2),
    Department VARCHAR(50)
);

-- Inserting sample data into the Employees table
INSERT INTO Employees (EmployeeID, FirstName, LastName, Salary, Department)
VALUES (1, 'John', 'Doe', 5000.00, 'Sales'),
       (2, 'Jane', 'Smith', 6000.00, 'Marketing'),
       (3, 'Mike', 'Johnson', 5500.00, 'Sales'),
       (4, 'Emily', 'Davis', 7000.00, 'HR'),
       (5, 'Alex', 'Wilson', 4500.00, 'Sales');

-- Creating a view
CREATE VIEW SalesEmployees
AS
SELECT EmployeeID, FirstName, LastName, Salary
FROM Employees
WHERE Department = 'Sales';

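Once created, the view can be queried like any table:

-- Returns only Sales department employees
SELECT * FROM SalesEmployees;

-- Views can be filtered and joined like tables
SELECT FirstName, LastName
FROM SalesEmployees
WHERE Salary > 5000;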

Transactions

A transaction groups multiple statements into a single atomic unit of work; savepoints let you undo part of the work without abandoning the whole transaction:

-- Begin the transaction explicitly
BEGIN TRAN;

-- Perform multiple operations within the transaction
INSERT INTO Customers (CustomerID, CustomerName) VALUES (1, 'John Doe');

-- Save the transaction at a specific point
SAVE TRAN Savepoint1;

-- Perform additional operations
UPDATE Orders SET OrderStatus = 'Shipped' WHERE OrderID = 12345;

-- Roll back to the savepoint (undoes the UPDATE but keeps the INSERT)
ROLLBACK TRAN Savepoint1;

-- Commit the transaction; only the INSERT is made permanent
COMMIT TRAN;

Transaction Consistency

In SQL Server, isolation levels determine how concurrent transactions interact and how changes made by one transaction become visible to others. SQL Server provides the following isolation levels to balance consistency against concurrency:

  1. Read Uncommitted (Level 0):
    This is the lowest isolation level, where transactions can read uncommitted data from other transactions. It allows dirty reads, meaning a transaction can read data that has been modified but not yet committed by another transaction. It provides the weakest consistency but the maximum concurrency.

  2. Read Committed (Level 1):
    In this isolation level, transactions can only read committed data. It prevents dirty reads but allows non-repeatable reads and phantom reads. A non-repeatable read occurs when a transaction reads the same row multiple times and the data changes between reads. A phantom read occurs when a transaction retrieves a set of rows multiple times and the set changes between retrievals. This is SQL Server's default isolation level.

  3. Repeatable Read (Level 2):
    In this isolation level, a transaction ensures that any data it reads remains unchanged for the duration of the transaction. It prevents dirty reads and non-repeatable reads but allows phantom reads. SQL Server enforces this by holding shared locks on the data that has been read until the transaction completes, blocking other transactions from modifying it.

  4. Serializable (Level 3):
    This is the highest isolation level and provides full isolation between transactions, so concurrent transactions do not affect each other. It prevents dirty reads, non-repeatable reads, and phantom reads. It achieves this by acquiring range locks on the accessed data, which can significantly reduce concurrency.

  5. Snapshot Isolation:
    Snapshot isolation (introduced in SQL Server 2005) provides a consistent snapshot of the data as it existed at the start of the transaction. A transaction will not see changes made by other transactions during its execution. It uses row versioning rather than locks to achieve this.

To specify the isolation level for a transaction, you can use the SET TRANSACTION ISOLATION LEVEL statement:

SET TRANSACTION ISOLATION LEVEL <isolation_level>;

For example, to set the isolation level to Read Committed, you can use:

SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

It's important to select the appropriate isolation level based on the requirements of your application to balance concurrency and consistency. Higher isolation levels offer stronger consistency but may reduce concurrency, while lower isolation levels offer higher concurrency but may result in less consistency.
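
To make the trade-off concrete, here is a minimal sketch of a dirty read under READ UNCOMMITTED (two sessions shown in one script via comments; the Orders table from the trigger example is assumed):

-- Session 1: modify a row but leave the transaction open
BEGIN TRAN;
UPDATE Orders SET TotalAmount = 999.00 WHERE OrderID = 1;

-- Session 2: under READ UNCOMMITTED, this reads the uncommitted 999.00
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT TotalAmount FROM Orders WHERE OrderID = 1;

-- Session 1: roll back; Session 2 has read a value that never existed
ROLLBACK TRAN;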

2nd Highest Salary (Top 50 SQL Interview Questions)

-- 1. Correlated subquery: the highest salary below the overall maximum
SELECT *
FROM employee
WHERE salary = (SELECT MAX(salary)
                FROM employee
                WHERE salary < (SELECT MAX(salary) FROM employee));

-- 2. OFFSET/FETCH (SQL Server 2012+): skip the top salary, take the next
SELECT DISTINCT salary
FROM employee
ORDER BY salary DESC
OFFSET 1 ROWS FETCH NEXT 1 ROWS ONLY;

-- 3. EXCEPT (SQL Server's equivalent of Oracle's MINUS): remove the maximum,
--    then take the maximum of what remains
SELECT name, salary
FROM employee
WHERE salary = (SELECT MAX(salary)
                FROM (SELECT salary FROM employee
                      EXCEPT
                      SELECT MAX(salary) FROM employee) t);

-- 4. DENSE_RANK() in a CTE
WITH T AS
(
    SELECT *,
           DENSE_RANK() OVER (ORDER BY Salary DESC) AS Rnk
    FROM Employees
)
SELECT Name
FROM T
WHERE Rnk = 2;

Max Salary Department Wise


SELECT DeptID, MAX(Salary) AS MaxSalary
FROM EmpDetails
GROUP BY DeptID;




-- To get the employee name along with the max salary, join back to the
-- grouped result ((DeptID, Salary) IN (...) is not valid T-SQL):
SELECT e.DeptID, e.EmpName, e.Salary
FROM EmpDetails e
INNER JOIN (
    SELECT DeptID, MAX(Salary) AS MaxSalary
    FROM EmpDetails
    GROUP BY DeptID
) m ON e.DeptID = m.DeptID AND e.Salary = m.MaxSalary;
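
An alternative that handles ties and returns names in one pass uses a window function (same EmpDetails table assumed):

-- Rank salaries within each department and keep the top rank
SELECT DeptID, EmpName, Salary
FROM (
    SELECT DeptID, EmpName, Salary,
           DENSE_RANK() OVER (PARTITION BY DeptID ORDER BY Salary DESC) AS Rnk
    FROM EmpDetails
) t
WHERE Rnk = 1;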

Display Alternate Records

To display alternate records in SQL Server (MSSQL), you can use various methods depending on the specific requirements of your query. Here are a few approaches:

  1. Using ROW_NUMBER(): You can use the ROW_NUMBER() function to assign a sequential number to each record in the result set. Then, you can filter the records based on whether the row number is even or odd.
   SELECT *
   FROM (
       SELECT *, ROW_NUMBER() OVER (ORDER BY [column]) AS RowNum
       FROM [table]
   ) AS subquery
   WHERE RowNum % 2 = 1;

This query assigns a row number to each record and filters only the rows where the row number is odd.

  2. Using a CTE (Common Table Expression): Another approach is to use a CTE to generate row numbers and then filter the rows based on even or odd row numbers.
   WITH CTE AS (
       SELECT *, ROW_NUMBER() OVER (ORDER BY [column]) AS RowNum
       FROM [table]
   )
   SELECT *
   FROM CTE
   WHERE RowNum % 2 = 1;

This query creates a CTE that generates row numbers and filters the rows where the row number is odd.

  3. Using OFFSET and FETCH: If you are using SQL Server 2012 or a later version, you can use the OFFSET and FETCH clauses. Note that a query accepts only one OFFSET ... FETCH pair, so each alternate row is retrieved with its own query:
   SELECT *
   FROM [table]
   ORDER BY [column]
   OFFSET 0 ROWS
   FETCH NEXT 1 ROWS ONLY;

   -- Next alternate rows: OFFSET 2 ROWS FETCH NEXT 1 ROWS ONLY;
   -- then OFFSET 4 ROWS, and so on.

This approach uses the OFFSET clause to skip a specified number of rows and the FETCH clause to retrieve a specified number of rows. By increasing the offset by 2 in each query, you retrieve alternate records.

These are just a few examples of how you can display alternate records in SQL Server. The specific method you choose depends on your query requirements and the version of SQL Server you are using.

Display Duplicates of a Column

SELECT [column], COUNT(*) AS Count
FROM [table]
GROUP BY [column]
HAVING COUNT(*) > 1;
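
To see the full duplicated rows rather than just the counts, one option is to join back to the grouped result (same placeholder names as above):

-- Return every row whose [column] value appears more than once
SELECT t.*
FROM [table] t
INNER JOIN (
    SELECT [column]
    FROM [table]
    GROUP BY [column]
    HAVING COUNT(*) > 1
) d ON t.[column] = d.[column];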

Display nth Row

SELECT *
FROM (
    SELECT *, ROW_NUMBER() OVER (ORDER BY some_column) AS rn
    FROM your_table
) AS subquery
WHERE rn = n; -- replace n with the desired row number
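
On SQL Server 2012 or later, OFFSET/FETCH is a shorter way to do the same thing: skip n - 1 rows and take one. For example, for the 5th row:

-- Skip the first 4 rows, return the 5th
SELECT *
FROM your_table
ORDER BY some_column
OFFSET 4 ROWS FETCH NEXT 1 ROWS ONLY;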

Display First or Last n Rows

To display the first or last n rows in SQL Server, you can use the TOP clause in combination with an ORDER BY clause to specify the desired order. Here are examples for retrieving the first or last n rows:

  1. Displaying the First n Rows:
SELECT TOP n *
FROM your_table
ORDER BY some_column;

Replace your_table with the name of your table, n with the number of rows you want to retrieve, and some_column with the column you want to order by.

For instance, to retrieve the first 5 rows from a table named employees ordered by the employee_id, you would use:

SELECT TOP 5 *
FROM employees
ORDER BY employee_id;

  2. Displaying the Last n Rows:
SELECT TOP n *
FROM your_table
ORDER BY some_column DESC;

Here, the ORDER BY clause is used with the DESC keyword to sort the rows in descending order.

For example, to retrieve the last 3 rows from a table named customers ordered by the registration_date, you would use:

SELECT TOP 3 *
FROM customers
ORDER BY registration_date DESC;
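
Note that TOP with DESC returns the last rows in reverse order. If you need them in their original ascending order, re-sort the result (a sketch using the same customers example):

-- Take the last 3 rows, then restore ascending order
SELECT *
FROM (
    SELECT TOP 3 *
    FROM customers
    ORDER BY registration_date DESC
) t
ORDER BY registration_date;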

What are Indexes

In SQL Server, indexes are organized as B+ trees. The B+ tree data structure is widely used in database systems and file systems to provide efficient access to large amounts of data. It is designed to maintain sorted data and allows efficient insertion, deletion, and search operations. The time complexity of operations on a B+ tree depends on the tree's height and branching factor. Here are the time complexities for common operations on a B+ tree:
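
As a quick illustration, here is what creating indexes looks like in T-SQL (a sketch reusing the Employees table from the Views section; the PRIMARY KEY there already created the table's clustered index, and the index names here are arbitrary):

-- Nonclustered index: a separate B+ tree whose leaf entries point back to the rows
CREATE NONCLUSTERED INDEX IX_Employees_Salary
ON Employees (Salary);

-- Composite index with included columns, covering department lookups
CREATE NONCLUSTERED INDEX IX_Employees_Department
ON Employees (Department) INCLUDE (FirstName, LastName);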

  1. Search (Find):

    • Average Case: O(log N)
    • Worst Case: O(log N)
  2. Insertion:

    • Average Case: O(log N)
    • Worst Case: O(log N)
  3. Deletion:

    • Average Case: O(log N)
    • Worst Case: O(log N)
  4. Range Queries (Range Search):

    • Average Case: O(log N + K), where K is the number of retrieved elements
    • Worst Case: O(log N + K)
  5. Splitting a Node (Node Split):

    • O(B), where B is the maximum number of children a node can have (the branching factor)
  6. Merging Nodes (Node Merge):

    • O(B)
  7. Updating a Key in a Node:

    • O(log N)

The B+ tree's time complexity is determined by its height (number of levels) and its branching factor (number of children per node). In an ideal scenario, with a balanced B+ tree, the height remains relatively small compared to the number of elements (N), resulting in efficient search, insertion, and deletion operations.

The reason for the relatively low time complexity is that B+ trees are designed to be balanced and ensure that the number of keys in each node is within a certain range. This balance, along with the use of linked lists to navigate between nodes at the same level, helps maintain efficient data access.

It's important to note that the time complexities provided are general estimates and may vary depending on factors such as implementation details, system characteristics, and workload patterns. In practice, B+ trees are highly efficient data structures for managing large datasets and providing fast access to data.

In a B+ tree, the branching factor refers to the maximum number of child nodes a parent node can have. It is a fundamental property of the B+ tree structure and plays a crucial role in determining the height and overall efficiency of the tree. A higher branching factor generally leads to a shallower tree, which results in faster search, insertion, and deletion operations.
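
As a rough worked example of why this matters: a B+ tree's height is about log base B of N. With N = 1,000,000 keys, a branching factor of 100 gives about 3 levels, while a branching factor of 2 would need about 20. The arithmetic, purely for illustration, in T-SQL:

-- Estimated height: ceil(log_B(N)) = ceil(ln(N) / ln(B))
SELECT CEILING(LOG(1000000.0) / LOG(100.0)) AS HeightWithB100,  -- about 3 levels
       CEILING(LOG(1000000.0) / LOG(2.0))   AS HeightWithB2;    -- about 20 levels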

Let's look at an example of a B+ tree with a branching factor of 3:

                [10 | 18]
              /     |     \
      [1, 3, 8] [10, 12, 15] [18, 20, 25]

In this example, the branching factor is 3, which means each internal node can have up to 3 children. Here's how the branching factor affects the tree:

  • The root node holds two routing keys (10, 18) and has 3 children.
  • Each internal node (non-leaf node) has up to 3 children.
  • Each leaf node contains data entries and points to the actual data records.

As a result of the branching factor, the B+ tree provides the following benefits:

  1. Shallower Tree: With a higher branching factor, the tree becomes shallower, which reduces the number of levels that need to be traversed during search operations. This leads to faster data retrieval.

  2. Reduced I/O Operations: Shallow trees result in fewer I/O operations since fewer nodes need to be accessed to locate the desired data.

  3. Efficient Use of Memory: B+ trees use internal nodes primarily for routing purposes, which allows for efficient use of memory. Leaf nodes store the actual data, and internal nodes help navigate to the correct leaf node.

  4. Balanced Structure: B+ trees maintain a balanced structure, ensuring that the number of keys in each node is within a certain range. This balance contributes to consistent performance across various operations.

While a higher branching factor offers advantages, there are trade-offs. A higher branching factor can increase the width of each node, potentially requiring more memory for storage. Additionally, updates and insertions may involve more frequent node splits, which can impact performance.

In practice, the choice of branching factor depends on factors such as the size of the data, available memory, and the specific use case. B+ trees with appropriate branching factors are widely used in database systems and file systems to efficiently manage and retrieve data.

A B+ tree with a low branching factor has fewer child nodes per parent node than a B+ tree with a higher branching factor. While a lower branching factor can lead to deeper trees and potentially slower search operations, there are scenarios where a lower branching factor might be preferred or necessary. Let's explore an example of a B+ tree with a low branching factor of 2:

                 [10]
               /      \
            [5]        [15]
           /   \      /    \
       [1, 3]  [5]  [10, 12]  [15, 18]

In this example, the B+ tree has a branching factor of 2, which means each internal node can have up to 2 children. Here's how the low branching factor affects the tree:

  • The root node holds one routing key (10) and has 2 children.
  • Each internal node has up to 2 children.
  • Each leaf node contains data entries and points to the actual data records.

Considerations for a B+ tree with a low branching factor:

  1. Deeper Tree: A lower branching factor results in a deeper tree structure compared to a tree with a higher branching factor. This can lead to longer paths from the root to the leaf nodes, which may impact the efficiency of search operations.

  2. More I/O Operations: Deeper trees may require more I/O operations to traverse multiple levels when searching for data. This can potentially slow down data retrieval.

  3. Memory Usage: B+ trees with a low branching factor may require more memory for storing internal nodes, as there are more levels in the tree. This can impact the overall memory usage of the data structure.

  4. Insertions and Deletions: Tree maintenance operations such as node splits and merges may occur more frequently in a low branching factor tree, especially if the tree is not well-balanced.

  5. Use Cases: B+ trees with a low branching factor might be suitable for scenarios where memory constraints or specific performance requirements dictate a trade-off between tree depth and memory usage.

It's important to note that the choice of branching factor depends on the specific use case, data characteristics, and performance considerations. While a low branching factor can have certain drawbacks, it might be a suitable choice for scenarios where memory usage needs to be optimized or when the nature of the data retrieval operations aligns well with a deeper tree structure.
