- SQL stands for Structured Query Language
- SQL lets you access and manipulate databases
- SQL is an ANSI (American National Standards Institute) standard
What is SQL Server?
· It is software, developed by Microsoft, that implements the RDBMS specification.
· It is also an ORDBMS.
· It is platform dependent.
· It is both GUI and command based software.
· It supports the SQL (SEQUEL) language, originally an IBM product, which is a non-procedural, case-insensitive, common database language.
Usage of SQL Server
- To create databases.
- To maintain databases.
- To analyze the data through SQL Server Analysis Services (SSAS).
- To generate reports through SQL Server Reporting Services (SSRS).
- To carry out ETL operations through SQL Server Integration Services (SSIS).
SQL Server Components
SQL Server works in a client-server architecture, hence it supports two types of components − (a) Workstation and (b) Server.
· Workstation components are installed on every device/SQL Server operator’s machine. These are just interfaces to interact with the Server components. Example: SSMS, SSCM, Profiler, BIDS, SQLEM etc.
· Server components are installed on a centralized server. These are services. Example: SQL Server, SQL Server Agent, SSIS, SSAS, SSRS, SQL Browser, SQL Server Full Text Search etc.
SQL Query Optimization Tips
We write a lot of SQL queries all the time, but most of the time we don’t consider their efficiency. Query optimization is a vital process for any database-related application; it can make even a complex application run simply. This article describes how we can optimize SQL queries. Summary of the article:
- What is SQL Query Optimization?
- Some SQL Query Optimization Tips
What is SQL Query Optimization?
Query optimization is the process of writing a query in such a way that it executes quickly. It is very important for any standard application.
Some SQL Query Optimization Tips
We use a lot of SQL queries in our applications all the time, but we rarely consider their performance. If we follow some tips, our queries will be more efficient. Some of them are listed below (a small illustration follows the list).
- Use views and stored procedures instead of heavy-duty queries. This reduces network traffic.
- Try to use constraints instead of triggers whenever possible. Constraints are much more efficient than triggers and can boost performance.
- Use table variables instead of temporary tables.
- Try to use the UNION ALL statement instead of UNION. UNION ALL is much faster than UNION because UNION ALL does not look for duplicate rows, whereas UNION does, whether or not they exist.
- Try to avoid using the DISTINCT clause.
- Try to avoid using SQL Server cursors whenever possible. If a cursor is required, use it in the standard way.
- Try to avoid the HAVING clause.
- Include the SET NOCOUNT ON statement in stored procedures. It suppresses the message that shows the number of rows affected by a T-SQL statement. This can reduce network traffic, because your client does not need to see this message.
- Try to return only the required columns rather than all columns of a table.
- Don’t use unnecessarily complex joins.
- Don’t use too many triggers.
- Use indexes on the table and follow indexing standards.
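As a small illustration of a few of these tips, here is a minimal sketch (the dbo.Orders and dbo.ArchivedOrders tables are hypothetical); it shows SET NOCOUNT ON inside a stored procedure, returning only the needed columns, and UNION ALL instead of UNION:
-- Hedged sketch: dbo.Orders and dbo.ArchivedOrders are hypothetical tables.
CREATE PROCEDURE dbo.usp_GetRecentOrders
AS
BEGIN
    SET NOCOUNT ON;  -- suppress "n rows affected" messages to cut network traffic

    -- Return only the required columns, not SELECT *
    SELECT order_id, order_date
    FROM dbo.Orders
    WHERE order_date >= DATEADD(DAY, -30, GETDATE())

    UNION ALL            -- faster than UNION: no duplicate-removal step

    SELECT order_id, order_date
    FROM dbo.ArchivedOrders
    WHERE order_date >= DATEADD(DAY, -30, GETDATE());
END;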
- Buffer management
- Logging and Transaction
- Concurrency and locking
- Replication services
- Analysis services
- Notification services
- Integration services
- Full text search service
- Stored procedures
- Triggers
- Views
- Sub-SELECTs (i.e. nested SELECTs)

Instance of SQL Server
- An instance is an installation of SQL Server.
- An instance is an exact copy of the same software.
- If we install 'n' times, then 'n' instances will be created.
- There are two types of instances in SQL Server: a) Default b) Named.
- Only one default instance is supported on one server.
- Multiple named instances are supported on one server.
- The default instance takes the server name as the instance name.
- The default instance service name is MSSQLSERVER.
- 16 instances are supported in the 2000 version.
- 50 instances are supported in 2005 and later versions.
Advantages of Instances
- To install different versions in one machine.
- To reduce cost.
- To maintain production, development, and test environments separately.
- To reduce temporary database problems.
- To separate security privileges.
- To maintain standby server.
We have classified the architecture of SQL Server into the following parts for easy understanding.
- General architecture
- Memory architecture
- Data file architecture
- Log file architecture
General Architecture
Client − Where the request is initiated.
Query − The SQL query, which is a high-level language.
Logical Units − Keywords, expressions, operators, etc.
N/W Packets − Network-related code.
Protocols − In SQL Server we have 4 protocols.
· Shared memory (for local connections and troubleshooting purposes).
· Named pipes (for connections within LAN connectivity).
· TCP/IP (for connections over WAN connectivity).
· VIA − Virtual Interface Adapter (requires special hardware set up by the vendor; deprecated from SQL Server 2012).
Server − Where the SQL Server services are installed and the databases reside.
Relational Engine − This is where the real execution is done. It contains the Query Parser, Query Optimizer and Query Executor.
Query Parser (Command Parser) and Compiler (Translator) − Checks the syntax of the query and converts the query to machine language.
Query Optimizer − Prepares the execution plan as output, taking the query, statistics and Algebrizer tree as input.
Execution Plan − It is like a roadmap, which contains the order of all the steps to be performed as part of the query execution.
Query Executor − This is where the query is executed step by step with the help of the execution plan; the storage engine is also contacted here.
Storage Engine − Responsible for storage and retrieval of data on the storage system (disk, SAN, etc.), data manipulation, locking and managing transactions.
SQL OS − This lies between the host machine (Windows OS) and SQL Server. All the activities performed on the database engine are taken care of by SQL OS. SQL OS provides various operating-system services, such as memory management (which deals with the buffer pool and log buffer) and deadlock detection using the blocking and locking structure.
Checkpoint Process − Checkpoint is an internal process that writes all dirty pages (modified pages) from the buffer cache to physical disk. Apart from this, it also writes the log records from the log buffer to the physical file. Writing dirty pages from the buffer cache to the data file is also known as hardening of dirty pages.
It is a dedicated process and is run automatically by SQL Server at specific intervals. SQL Server runs the checkpoint process for each database individually. Checkpoint helps to reduce the recovery time for SQL Server in the event of an unexpected shutdown or system crash/failure.
Checkpoints in SQL Server
In SQL Server 2012 there are four types of checkpoints −
· Automatic − This is the most common checkpoint, which runs as a process in the background to make sure the SQL Server database can be recovered within the time limit defined by the Recovery Interval server configuration option.
· Indirect − This is new in SQL Server 2012. This also runs in the background, but to meet a user-specified target recovery time for the specific database where the option has been configured. Once the TARGET_RECOVERY_TIME for a given database has been set, it overrides the Recovery Interval specified for the server and avoids automatic checkpoints on that database.
· Manual − This one runs just like any other T-SQL statement; once you issue the CHECKPOINT command, it runs to completion. A manual checkpoint runs for your current database only. You can also specify the optional checkpoint_duration, which is the time within which you want your checkpoint to complete (see the example below).
· Internal − As a user you can’t control internal checkpoints. They are issued on specific operations such as:
Shutdown initiates a checkpoint operation on all databases, except when the shutdown is not clean (SHUTDOWN WITH NOWAIT).
The recovery model is changed from Full/Bulk-logged to Simple.
A backup of the database is taken.
If your database is in the simple recovery model, the checkpoint process executes automatically either when the log becomes 70% full or based on the server option Recovery Interval.
An ALTER DATABASE command to add or remove a data/log file also initiates a checkpoint.
A checkpoint also takes place when the recovery model of the database is bulk-logged and a minimally logged operation is performed.
Database snapshot creation.
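As a small illustration of the manual checkpoint mentioned above, the CHECKPOINT statement can be issued directly in the current database (the optional duration is only a target, not a guarantee):
-- Run a manual checkpoint in the current database.
CHECKPOINT;

-- Ask SQL Server to try to complete the checkpoint within about 10 seconds.
CHECKPOINT 10;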
· Lazy Writer Process − The lazy writer pushes dirty pages to disk for an entirely different reason: it needs to free up memory in the buffer pool. This happens when SQL Server comes under memory pressure. As far as I am aware, this is controlled by an internal process and there is no setting for it.
SQL Server constantly monitors memory usage to assess resource contention (or availability); its job is to make sure that there is a certain amount of free space available at all times. As part of this process, when it notices any such resource contention, it triggers the lazy writer to free up some pages in memory by writing out dirty pages to disk. It employs the Least Recently Used (LRU) algorithm to decide which pages are to be flushed to disk.
If the lazy writer is always active, it could indicate a memory bottleneck.
Memory Architecture
Following are some of the salient features of the memory architecture.
· One of the primary design goals of all database software is to minimize disk I/O, because disk reads and writes are among the most resource-intensive operations.
· Memory in Windows can be described as a Virtual Address Space, shared by kernel mode (OS mode) and user mode (applications like SQL Server).
· The SQL Server "user address space" is broken into two regions: MemToLeave and Buffer Pool.
· The size of MemToLeave (MTL) and the Buffer Pool (BPool) is determined by SQL Server during startup.
· Buffer management is a key component in achieving highly efficient I/O. The buffer management component consists of two mechanisms: the buffer manager to access and update database pages, and the buffer pool to reduce database file I/O.
· The buffer pool is further divided into multiple sections, the most important ones being the buffer cache (also referred to as the data cache) and the procedure cache. The buffer cache holds data pages in memory so that frequently accessed data can be retrieved from cache. The alternative would be reading data pages from disk. Reading data pages from cache optimizes performance by minimizing the number of required I/O operations, which are inherently slower than retrieving data from memory.
· The procedure cache keeps stored procedure and query execution plans to minimize the number of times that query plans have to be generated. You can find information about the size and activity within the procedure cache using the DBCC PROCCACHE statement (see the example after this list).
Other portions of the buffer pool include −
· System level data structures − Holds SQL Server instance-level data about databases and locks.
· Log cache − Reserved for reading and writing transaction log pages.
· Connection context − Each connection to the instance has a small area of memory to record the current state of the connection. This information includes stored procedure and user-defined function parameters, cursor positions and more.
· Stack space − Windows allocates stack space for each thread started by SQL Server.
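The DBCC PROCCACHE statement mentioned above can be run as-is to get a one-row summary of the procedure cache:
-- Returns summary information about the procedure cache
-- (number of buffers, buffers used, active plans, etc.).
DBCC PROCCACHE;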
Data File Architecture
Data file architecture has the following components −
File Groups
Database files can be grouped together in file groups for allocation and administration purposes. No file can be a member of more than one file group. Log files are never part of a file group. Log space is managed separately from data space.
There are two types of file groups in SQL Server: Primary and User-defined. The primary file group contains the primary data file and any other files not specifically assigned to another file group. All pages for the system tables are allocated in the primary file group. User-defined file groups are any file groups specified using the FILEGROUP keyword in a CREATE DATABASE or ALTER DATABASE statement (a sketch follows below).
One file group in each database operates as the default file group. When SQL Server allocates a page to a table or index for which no file group was specified when they were created, the pages are allocated from the default file group. To switch the default file group from one file group to another, the user must have the db_owner fixed database role.
By default, the primary file group is the default file group. The user must have the db_owner fixed database role in order to back up files and file groups individually.
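As a hedged sketch of the ALTER DATABASE syntax described above (the database name, file name and path are hypothetical):
-- Add a user-defined file group, place a secondary data file in it,
-- and make it the default file group.
ALTER DATABASE SampleDB ADD FILEGROUP FG_UserData;

ALTER DATABASE SampleDB
ADD FILE (
    NAME = SampleDB_Data2,                       -- logical name
    FILENAME = 'C:\SQLData\SampleDB_Data2.ndf',  -- physical name
    SIZE = 100MB,
    FILEGROWTH = 50MB
) TO FILEGROUP FG_UserData;

ALTER DATABASE SampleDB MODIFY FILEGROUP FG_UserData DEFAULT;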
Files
Databases have three types of files: primary data file, secondary data file, and log file. The primary data file is the starting point of the database and points to the other files in the database.
Every database has one primary data file. We can give any extension to the primary data file, but the recommended extension is .mdf. A secondary data file is any data file other than the primary data file in that database. Some databases may have multiple secondary data files, while others may not have any. The recommended extension for secondary data files is .ndf.
Log files hold all of the log information used to recover the database. A database must have at least one log file, and can have multiple log files. The recommended extension for log files is .ldf.
The location of all the files in a database is recorded in both the master database and the primary file of the database. Most of the time, the database engine uses the file location from the master database.
Files have two names − logical and physical. The logical name is used to refer to the file in all T-SQL statements. The physical name is the OS_file_name; it must follow the rules of the OS. Data and log files can be placed on either FAT or NTFS file systems, but cannot be placed on compressed file systems. There can be up to 32,767 files in one database. A hedged example of creating these files appears below.
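A minimal sketch of creating a database with all three file types (the database name and file paths are hypothetical):
-- Create a database with a primary data file (.mdf), a secondary
-- data file (.ndf) and a log file (.ldf); each file has a logical
-- name and a physical OS file name.
CREATE DATABASE SampleDB
ON PRIMARY (
    NAME = SampleDB_Primary,
    FILENAME = 'C:\SQLData\SampleDB.mdf'
),
(
    NAME = SampleDB_Secondary,
    FILENAME = 'C:\SQLData\SampleDB_1.ndf'
)
LOG ON (
    NAME = SampleDB_Log,
    FILENAME = 'C:\SQLData\SampleDB_log.ldf'
);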
Extents
Extents are the basic unit in which space is allocated to tables and indexes. An extent is 8 contiguous pages, or 64 KB. SQL Server has two types of extents: uniform and mixed. Uniform extents are owned by a single object; mixed extents are shared by up to eight objects.
Pages
The page is the fundamental unit of data storage in MS SQL Server. The size of a page is 8 KB. The start of each page is a 96-byte header used to store system information such as the type of page, the amount of free space on the page and the object ID of the object owning the page. There are 9 types of data pages in SQL Server.
· Data − Data rows with all data except text, ntext and image data.
· Index − Index entries.
· Text/Image − Text, image and ntext data.
· GAM − Information about allocated extents.
· SGAM − Information about allocated extents at system level.
· Page Free Space (PFS) − Information about free space available on pages.
· Index Allocation Map (IAM) − Information about extents used by a table or index.
· Bulk Changed Map (BCM) − Information about extents modified by bulk operations since the last BACKUP LOG statement.
· Differential Changed Map (DCM) − Information about extents that have changed since the last BACKUP DATABASE statement.
Log File Architecture
The SQL Server transaction log operates logically as if the transaction log were a string of log records. Each log record is identified by a Log Sequence Number (LSN). Each log record contains the ID of the transaction that it belongs to.
Log records for data modifications record either the logical operation performed or the before and after images of the modified data. The before image is a copy of the data before the operation is performed; the after image is a copy of the data after the operation has been performed.
The steps to recover an operation depend on the type of log record −
- Logical operation logged.
- To roll the logical operation forward, the operation is performed again.
- To roll the logical operation back, the reverse logical operation is performed.
- Before and after image logged.
- To roll the operation forward, the after image is applied.
- To roll the operation back, the before image is applied.
Different types of operations are recorded in the transaction log. These operations include −
· The start and end of each transaction.
· Every data modification (insert, update, or delete). This includes changes by system stored procedures or data definition language (DDL) statements to any table, including system tables.
· Every extent and page allocation or deallocation.
· Creating or dropping a table or index.
Rollback operations are also logged. Each transaction reserves space on the transaction log to make sure that enough log space exists to support a rollback caused by either an explicit ROLLBACK statement or an error. This reserved space is freed when the transaction is completed.
The section of the log file from the first log record that must be present for a successful database-wide rollback to the last-written log record is called the active part of the log, or the active log. This is the section of the log required for a full recovery of the database. No part of the active log can ever be truncated. The LSN of this first log record is known as the minimum recovery LSN (Min LSN).
The SQL Server Database Engine divides each physical log file internally into a number of virtual log files. Virtual log files have no fixed size, and there is no fixed number of virtual log files per physical log file.
The Database Engine chooses the size of the virtual log files dynamically while it is creating or extending log files. The Database Engine tries to maintain a small number of virtual files. The size or number of virtual log files cannot be configured or set by administrators. The only time virtual log files affect system performance is if the physical log files are defined by small size and growth_increment values.
The size value is the initial size of the log file and the growth_increment value is the amount of space added to the file every time new space is required. If the log files grow to a large size because of many small increments, they will have many virtual log files. This can slow down database startup and also log backup and restore operations.
We recommend that you assign log files a size value close to the final size required, and also use a relatively large growth_increment value (a sketch follows below). SQL Server uses a write-ahead log (WAL), which guarantees that no data modifications are written to disk before the associated log record is written to disk. This maintains the ACID properties of a transaction.
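As a hedged sketch of this recommendation (the database and logical file names are hypothetical), the log file's initial size and growth increment can be set with ALTER DATABASE:
-- Pre-size the log file and use a larger growth increment so that
-- growth happens in few, large steps (fewer virtual log files).
ALTER DATABASE SampleDB
MODIFY FILE (
    NAME = SampleDB_Log,
    SIZE = 4GB,
    FILEGROWTH = 512MB
);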
SQL Fundamentals
- Introduction
- Retrieving Data Using the SQL SELECT Statement
- Restricting and Sorting Data
- Using Single-Row Functions to Customize Output
- Using Conversion Functions and Conditional Expressions
- Reporting Aggregated Data Using the Group Functions
- Displaying Data from Multiple Tables
- Using Subqueries to Solve Queries
- Using the Set Operators
- Manipulating Data
- Using DDL Statements to Create and Manage Tables
- Creating Other Schema Objects
SQL Commands: Different Types of Keys in a Database
There are mainly 7 types of keys that can be considered in a database. A short sketch illustrating a few of them follows below.
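As a minimal, hedged sketch (the table and column names are hypothetical), the most common key types can be declared directly in T-SQL:
CREATE TABLE dbo.Departments (
    dept_id   INT IDENTITY PRIMARY KEY,     -- primary key
    dept_name VARCHAR(50) NOT NULL UNIQUE   -- unique key
);

CREATE TABLE dbo.Employees (
    emp_id   INT IDENTITY PRIMARY KEY,
    email    VARCHAR(100) NOT NULL UNIQUE,
    dept_id  INT NOT NULL
        FOREIGN KEY REFERENCES dbo.Departments (dept_id)  -- foreign key
);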
SQL Commands
- SQL commands are instructions. They are used to communicate with the database and to perform specific tasks, functions, and queries on data.
- SQL can perform various tasks like creating a table, adding data to tables, dropping a table, modifying a table, and setting permissions for users.
Types of SQL Commands
There are five types of SQL commands: DDL, DML, DCL, TCL, and DQL.
1. Data Definition Language (DDL)
- DDL changes the structure of a table: creating a table, deleting a table, altering a table, etc.
- All DDL commands are auto-committed, which means they permanently save all the changes in the database.
Here are some commands that come under DDL (a sketch follows the list):
- CREATE
- ALTER
- DROP
- TRUNCATE
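A minimal sketch of these DDL commands, using a hypothetical dbo.Products table:
CREATE TABLE dbo.Products (
    product_id INT PRIMARY KEY,
    name       VARCHAR(100) NOT NULL
);

ALTER TABLE dbo.Products ADD price DECIMAL(10, 2);  -- change the table structure

TRUNCATE TABLE dbo.Products;  -- remove all rows but keep the table

DROP TABLE dbo.Products;      -- remove the table itself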
2. Data Manipulation Language (DML)
- DML commands are used to modify the data in the database. They are responsible for all forms of changes to the data.
- DML commands are not auto-committed, which means the changes are not saved permanently until committed; they can be rolled back.
Here are some commands that come under DML (a sketch follows the list):
- INSERT
- UPDATE
- DELETE
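A minimal sketch of these DML commands inside a transaction (reusing the hypothetical dbo.Products table from the DDL sketch):
BEGIN TRANSACTION;

INSERT INTO dbo.Products (product_id, name, price)
VALUES (1, 'Keyboard', 25.00);

UPDATE dbo.Products
SET price = 29.99
WHERE product_id = 1;

DELETE FROM dbo.Products
WHERE product_id = 1;

ROLLBACK TRANSACTION;  -- DML changes can be rolled back before a COMMIT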
3. Data Control Language (DCL)
DCL commands are used to grant authority to, and take it back from, any database user.
Here are some commands that come under DCL (a sketch follows the list):
- GRANT
- REVOKE
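A minimal sketch of GRANT and REVOKE (the user name and table are hypothetical):
GRANT SELECT, INSERT ON dbo.Products TO reporting_user;

REVOKE INSERT ON dbo.Products FROM reporting_user;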
4. Transaction Control Language (TCL)
TCL commands can be used only with DML commands such as INSERT, DELETE and UPDATE.
DDL operations are automatically committed in the database, which is why TCL commands cannot be used while creating tables or dropping them. A sketch of the typical TCL statements follows below.
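A minimal sketch of the typical TCL statements (COMMIT, ROLLBACK and a savepoint), again using the hypothetical dbo.Products table:
BEGIN TRANSACTION;

UPDATE dbo.Products
SET price = price * 1.10
WHERE product_id = 1;

SAVE TRANSACTION PriceUpdate;      -- savepoint

DELETE FROM dbo.Products
WHERE product_id = 1;

ROLLBACK TRANSACTION PriceUpdate;  -- undo only the DELETE

COMMIT TRANSACTION;                -- keep the price update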
DBMS Architecture
- The DBMS design depends upon its architecture. The basic client/server architecture is used to deal with a large number of PCs, web servers, database servers and other components that are connected with networks.
- The client/server architecture consists of many PCs and workstations which are connected via the network.
- DBMS architecture depends upon how users are connected to the database to get their requests done.
Types of DBMS Architecture
Database architecture can be seen as single-tier or multi-tier. But logically, database architecture is of two types: 2-tier architecture and 3-tier architecture.
1-Tier Architecture
- In this architecture, the database is directly available to the user. It means the user can directly sit at the DBMS and use it.
- Any changes done here will be made directly on the database itself. It doesn't provide a handy tool for end users.
- The 1-tier architecture is used for development of local applications, where programmers can directly communicate with the database for a quick response.
2-Tier Architecture
- The 2-tier architecture is the same as the basic client-server model. In the two-tier architecture, applications on the client end can directly communicate with the database at the server side. For this interaction, APIs like ODBC and JDBC are used.
- The user interfaces and application programs run on the client side.
- The server side is responsible for providing functionalities like query processing and transaction management.
- To communicate with the DBMS, the client-side application establishes a connection with the server side.
3-Tier Architecture
- The 3-tier architecture contains another layer between the client and the server. In this architecture, the client can't directly communicate with the server.
- The application on the client end interacts with an application server, which further communicates with the database system.
- The end user has no idea about the existence of the database beyond the application server. The database also has no idea about any other user beyond the application.
- The 3-tier architecture is used for large web applications.
SQL Server Joins
In a relational database, data is distributed
in multiple logical tables. To get a complete meaningful set of data, you need
to query data from these tables by using joins. SQL Server supports many kinds
of joins including inner join, left join, right join, full outer join, and
cross join. Each join type specifies how SQL Server uses data from one table to
select rows in another table.
Let’s set up sample tables for demonstration.
Setting up sample tables
First, create a new schema named hr:
CREATE SCHEMA hr;
GO
Second, create two new tables named candidates and employees in the hr schema:
CREATE TABLE hr.candidates(
    id INT PRIMARY KEY IDENTITY,
    fullname VARCHAR(100) NOT NULL
);

CREATE TABLE hr.employees(
    id INT PRIMARY KEY IDENTITY,
    fullname VARCHAR(100) NOT NULL
);

INSERT INTO hr.candidates(fullname)
VALUES
    ('John Doe'),
    ('Lily Bush'),
    ('Peter Drucker'),
    ('Jane Doe');

INSERT INTO hr.employees(fullname)
VALUES
    ('John Doe'),
    ('Jane Doe'),
    ('Michael Scott'),
    ('Jack Sparrow');
Let’s call the candidates table the left table and the employees table the right table.
SQL Server Inner Join
Inner join produces a data set that includes rows from the left table which have matching rows from the right table.
The following example uses the INNER JOIN clause to get the rows from the candidates table that have corresponding rows with the same values in the fullname column of the employees table:
SELECT
    c.id candidate_id,
    c.fullname candidate_name,
    e.id employee_id,
    e.fullname employee_name
FROM
    hr.candidates c
    INNER JOIN hr.employees e
        ON e.fullname = c.fullname;
Here is the output:
The following Venn diagram illustrates the result of the inner join of two result sets:
SQL Server Left Join
Left join selects data starting from the left table and matching rows in the right table. The left join returns all rows from the left table and the matching rows from the right table. If a row in the left table does not have a matching row in the right table, the columns of the right table will have nulls.
The left join is also known as a left outer join. The OUTER keyword is optional.
The following statement joins the candidates table with the employees table using a left join:
SELECT
    c.id candidate_id,
    c.fullname candidate_name,
    e.id employee_id,
    e.fullname employee_name
FROM
    hr.candidates c
    LEFT JOIN hr.employees e
        ON e.fullname = c.fullname;
Here is the output:
The following Venn diagram illustrates the result of the left join of two result sets:
To get the rows that are available only in the left table but not in the right table, you add a WHERE clause to the above query:
SELECT
    c.id candidate_id,
    c.fullname candidate_name,
    e.id employee_id,
    e.fullname employee_name
FROM
    hr.candidates c
    LEFT JOIN hr.employees e
        ON e.fullname = c.fullname
WHERE
    e.id IS NULL;
The following picture shows the output:
And the following Venn diagram illustrates the result of the left join that selects rows available only in the left table:
SQL Server Right Join
The right join or right outer join selects data starting from the right table. It is a reversed version of the left join.
The right join returns a result set that contains all rows from the right table and the matching rows in the left table. If a row in the right table does not have a matching row in the left table, all columns in the left table will contain nulls.
The following example uses the right join to query rows from the candidates and employees tables:
SELECT
    c.id candidate_id,
    c.fullname candidate_name,
    e.id employee_id,
    e.fullname employee_name
FROM
    hr.candidates c
    RIGHT JOIN hr.employees e
        ON e.fullname = c.fullname;
Here is the output:
Notice that all rows from the right table (employees) are included in the result set.
And the Venn diagram that illustrates the right join of two result sets:
Similarly, you can get rows that are available only in the right table by adding a WHERE clause to the above query as follows:
SELECT
    c.id candidate_id,
    c.fullname candidate_name,
    e.id employee_id,
    e.fullname employee_name
FROM
    hr.candidates c
    RIGHT JOIN hr.employees e
        ON e.fullname = c.fullname
WHERE
    c.id IS NULL;
Here is the output:
And the Venn diagram that illustrates the operation:
SQL Server Full Join
The full outer join or full join returns a result set that contains all rows from both the left and right tables, with the matching rows from both sides where available. In case there is no match, the missing side will have NULL values. The following example shows how to perform a full join between the candidates and employees tables:
SELECT
    c.id candidate_id,
    c.fullname candidate_name,
    e.id employee_id,
    e.fullname employee_name
FROM
    hr.candidates c
    FULL JOIN hr.employees e
        ON e.fullname = c.fullname;
Here is the output:
The Venn diagram that illustrates the full outer join:
To select rows that exist in either the left or the right table, you exclude rows that are common to both tables by adding a WHERE clause as shown in the following query:
SELECT
    c.id candidate_id,
    c.fullname candidate_name,
    e.id employee_id,
    e.fullname employee_name
FROM
    hr.candidates c
    FULL JOIN hr.employees e
        ON e.fullname = c.fullname
WHERE
    c.id IS NULL
    OR e.id IS NULL;
Here is the output:
And the Venn diagram that illustrates the above operation:
SQL Server Analysis Services (SSAS)
SQL Server Analysis Services (SSAS) is the technology from the Microsoft Business Intelligence stack used to develop Online Analytical Processing (OLAP) solutions. In simple terms, you can use SSAS to create cubes using data from data marts / data warehouses for deeper and faster data analysis.
Cubes are multi-dimensional data sources which have dimensions and facts (also known as measures) as their basic constituents. From a relational perspective, dimensions can be thought of as master tables and facts can be thought of as measurable details. These details are generally stored in a pre-aggregated proprietary format, and users can analyze huge amounts of data and slice this data by dimensions very easily. Multi-dimensional Expressions (MDX) is the query language used to query a cube, similar to the way T-SQL is used to query a table in SQL Server.
Simple examples of dimensions can be product / geography / time / customer, and similar simple examples of facts can be orders / sales. A typical analysis could be to analyze sales in the Asia-Pacific geography during the past 5 years. You can think of this data as a pivot table where geography is the column axis and years are the row axis, and sales can be seen as the values. Geography can also have its own hierarchy like Country->City->State. Time can also have its own hierarchy like Year->Semester->Quarter. Sales could then be analyzed using any of these hierarchies for effective data analysis.
A typical high-level cube development process using SSAS involves the following steps:
1) Reading data from a dimensional model
2) Configuring a schema in BIDS (Business Intelligence Development Studio)
3) Creating dimensions, measures and cubes from this schema
4) Fine tuning the cube as per the requirements
5) Deploying the cube
In this tutorial we will step through a number of topics that you need to understand in order to successfully create a basic cube. Our high-level outline is as follows:
- Design and develop a star-schema
- Create dimensions, hierarchies, and cubes
- Process and deploy a cube
- Develop calculated measures and named sets using MDX
- Browse the cube data using Excel as the client tool
When you start learning SSAS, you should have a reasonable relational database background. But when you start working in a multi-dimensional environment, you need to stop thinking from a two-dimensional (relational database) perspective; this shift will develop over time.
In this tutorial, we will also try to develop an understanding of OLAP development from the eyes of an OLTP practitioner.
SQL Server Integration Services (SSIS)
SQL Server Integration Services (SSIS) is a tool that we use to perform ETL operations, i.e. extract, transform and load data. While ETL processing is common in data warehousing (DW) applications, SSIS is by no means limited to just DW; e.g. when you create a Maintenance Plan using SQL Server Management Studio (SSMS), an SSIS package is created. At a high level, SSIS provides the ability to:
- Retrieve data from just about any source
- Perform various transformations on the data; e.g. convert from one type to another, convert to uppercase or lowercase, perform calculations, etc.
- Load data into just about any destination
- Define a workflow
The first version of SSIS was released with SQL Server 2005. SSIS is a replacement for Data Transformation Services (DTS), which was available with SQL Server 7.0 and SQL Server 2000. SSIS builds on the capabilities introduced with DTS.
In this tutorial we will step through a number of topics that you need to understand in order to successfully build an SSIS package. Our high-level outline is as follows:
- Creating SSIS packages with SQL Server Management Studio (SSMS)
- Business Intelligence Development Studio (BIDS)
- Creating a simple SSIS package in BIDS
- Deploying SSIS packages
- Executing SSIS packages
SQL Server Reporting Services (SSRS)
SQL Server Reporting Services 2008 (SSRS) is a feature included in the SQL Server 2008 product. We use SSRS to design, develop, test, and deploy reports. SSRS was originally slated to be released with SQL Server 2005, but it wound up being released a little bit ahead of SQL Server 2005. SSRS leverages the Business Intelligence Development Studio (BIDS) developer tool for all aspects of authoring and deploying reports. BIDS is included with SQL Server 2008.
In this tutorial we will step through a number of topics that you need to understand to successfully build a report. Our high-level outline is as follows:
- Reporting Services Components
- Install Reporting Services
- Business Intelligence Development Studio (BIDS)
- Install Sample Database
- Create a Simple Report with the Wizard
- Create a Simple Report with the Report Designer
- Deploy Reports
- Configure Report Manager Security
The following are the main points about the Windows service:
- The HTTP listener is a new feature; prior versions of Reporting Services required Internet Information Server (IIS). With Reporting Services 2008, IIS is no longer required.
- Report Manager is an ASP.NET application that provides a browser-based interface for managing Reporting Services. We will cover the Report Manager in a later section.
- The Web Service is also implemented as an ASP.NET application; it provides a programmatic interface to Reporting Services. The Report Manager uses the web service. You can use the web service to create your own custom implementation of any feature provided by the Report Manager.
- Background Processing provides the core services for Reporting Services.
- The Report Manager, Web Service, and Background Processing are each implemented as separate application domains.