CASE (Transact-SQL)

Evaluates a list of conditions and returns one of multiple possible result expressions.

CASE has two formats:

· The simple CASE function compares an expression to a set of simple expressions to determine the result.

· The searched CASE function evaluates a set of Boolean expressions to determine the result.
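
As a quick illustrative sketch of the difference between the two formats (the local variable @x and the result aliases are hypothetical, not part of the examples that follow):

DECLARE @x INT;
SET @x = 2;

-- Simple CASE: compares one input expression against a list of values.
SELECT CASE @x
WHEN 1 THEN 'one'
WHEN 2 THEN 'two'
ELSE 'other'
END AS SimpleResult;

-- Searched CASE: evaluates a list of Boolean expressions.
SELECT CASE
WHEN @x < 0 THEN 'negative'
WHEN @x = 0 THEN 'zero'
ELSE 'positive'
END AS SearchedResult;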

Examples:

A. Using a SELECT statement with a simple CASE function

Within a SELECT statement, a simple CASE function allows for only an equality check; no other comparisons are made. The following example uses the CASE function to change the display of product line categories to make them more understandable.

USE AdventureWorks;
GO
SELECT   ProductNumber, Category =
CASE ProductLine
WHEN 'R' THEN 'Road'
WHEN 'M' THEN 'Mountain'
WHEN 'T' THEN 'Touring'
WHEN 'S' THEN 'Other sale items'
ELSE 'Not for sale'
END,
Name
FROM Production.Product
ORDER BY ProductNumber;
GO

B. Using a SELECT statement with a searched CASE function

Within a SELECT statement, the searched CASE function allows for values to be replaced in the result set based on comparison values. The following example displays the list price as a text comment based on the price range for a product.

USE AdventureWorks;
GO
SELECT   ProductNumber, Name, 'Price Range' =
CASE
WHEN ListPrice =  0 THEN 'Mfg item - not for resale'
WHEN ListPrice < 50 THEN 'Under $50'
WHEN ListPrice >= 50 and ListPrice < 250 THEN 'Under $250'
WHEN ListPrice >= 250 and ListPrice < 1000 THEN 'Under $1000'
ELSE 'Over $1000'
END
FROM Production.Product
ORDER BY ProductNumber ;
GO



Paging in SQL Server


Web Application Performance (Paging in Sql Server)

As a Web developer, you know by now that the default paging capabilities of ASP.NET Web controls such as DataGrid and GridView cause performance problems when the database contains thousands upon thousands of records, because every round trip to the database server retrieves ALL the records, ALL of the time. This may be fine for very small databases, but in a web application performance always matters.

An alternative approach is to serve records "on demand": for example, retrieving records 1-100, then 101-200, in response to some event, rather than retrieving all the records every time. This is what paging in SQL Server is about.

MySQL

The simplest solution is provided by MySQL with the LIMIT keyword. Consider the query below:

SELECT * FROM Product LIMIT 15, 5

It retrieves records 16 to 20. In the query above, 15 is the offset at which the result set starts and 5 is the number of records you want to retrieve.

SQL Server 2005

Unfortunately, SQL Server does not have an equivalent of the LIMIT keyword. The nearest it offers is TOP N, which returns the first N rows. Still, there are two alternatives we can use for paging.

Using ROW_NUMBER()

SQL Server 2005 includes the ROW_NUMBER() function, which adds an integer field to each record. In other words, it adds the record's position within the result set as an additional field, so that the first record has a 1, the second a 2, and so on.

To ensure the numbering is consistent, however, SQL Server needs to know how to sort the data. Because of this, ROW_NUMBER() must immediately be followed by an OVER() clause, which for ROW_NUMBER() has one required element: an ORDER BY clause. The basic syntax is:

SELECT ROW_NUMBER() OVER(ORDER BY [Name]) as RowNum,

[Name], [SKU], [ManufacturerPartNumber]

FROM Product

The Product data in the result set appears sorted by Name, with an extra column indicating each record's position within the results.

If we want to limit the results displayed to a certain range, we need to nest this SELECT inside another one. To limit our results to records 5 to 10, we can use the following query:

SELECT * FROM

(SELECT ROW_NUMBER() OVER(ORDER BY [Name]) AS RowNum,

[Name], [SKU], [ManufacturerPartNumber] FROM Product) AS Product1

WHERE RowNum >= 5 AND RowNum <= 10
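
In practice the fixed bounds 5 and 10 are usually computed from a page number and a page size. The following is a minimal sketch of that, assuming a 1-based page number and the same Product columns as above (the variable names are illustrative):

DECLARE @PageNumber INT, @PageSize INT;
SET @PageNumber = 2;  -- hypothetical values: second page,
SET @PageSize = 5;    -- five rows per page

SELECT [Name], [SKU], [ManufacturerPartNumber]
FROM (
    SELECT ROW_NUMBER() OVER(ORDER BY [Name]) AS RowNum,
           [Name], [SKU], [ManufacturerPartNumber]
    FROM Product
) AS Product1
WHERE RowNum BETWEEN (@PageNumber - 1) * @PageSize + 1 AND @PageNumber * @PageSize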

With Nested Queries

There is another solution that consists of only one SQL statement, and so is efficient and quick even with large databases. The good news is that it always works. It looks a bit complicated, but it is not.

Let us go straight to an example that limits the results to records 6 through 10:

SELECT P.[Name], P.[SKU], P.[ManufacturerPartNumber] FROM (

SELECT TOP 5 [ProductID],[Name] FROM (

SELECT TOP 10 [ProductID],[Name]

FROM Product

ORDER BY Product.[Name] ASC) as Product1

ORDER by Product1.[Name] DESC) as Product2

INNER JOIN Product P ON Product2.ProductID = P.ProductID

ORDER by Product2.[Name] ASC

Here 5 is the number of rows you want to retrieve (the page size) and 10 is the position of the last row in the page, i.e. the offset plus the page size.

The innermost SQL statement fetches 10 records (1 to 10), ordered by product Name in ascending order.

This means that if you have a table with 10,000 records and you want the last 10 of them, the innermost SELECT statement will indeed retrieve 10,000 records. This of course has its ramifications on performance, but since only the primary key fields (which are indexed) and optionally some sort fields are involved, the impact is minimal.

The middle SQL statement fetches 5 records (10 down to 6), ordered by product Name in descending order. Finally, the outer SQL statement is a simple SELECT that returns the result in ascending order (6 to 10). The innermost and middle SQL statements use only primary key fields or indexed fields.

Looking at this in Query Analyzer, the extra nesting does not appear to add much to the load. The SQL statement plays with the sort orders to limit the records, thus producing paged recordsets. This means there should be at least ONE field to sort on; if there isn't an obvious one, sort on the primary key field(s)!
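
To make the nested-TOP pattern reusable, the two TOP values can likewise be derived from a page number and a page size. The following is only a sketch under those assumptions (SQL Server 2005 accepts an expression in TOP when it is parenthesized); the procedure name is hypothetical:

CREATE PROCEDURE usp_GetProductPage
@PageNumber INT,  -- 1-based page number
@PageSize INT     -- rows per page
AS
BEGIN
SELECT P.[Name], P.[SKU], P.[ManufacturerPartNumber]
FROM (
    SELECT TOP (@PageSize) [ProductID], [Name]
    FROM (
        SELECT TOP (@PageNumber * @PageSize) [ProductID], [Name]
        FROM Product
        ORDER BY [Name] ASC        -- rows 1 .. PageNumber * PageSize
    ) AS Product1
    ORDER BY [Name] DESC           -- keep only the last PageSize of them
) AS Product2
INNER JOIN Product P ON Product2.ProductID = P.ProductID
ORDER BY Product2.[Name] ASC       -- restore ascending order
END

Note that on a final, partially filled page this sketch overlaps with the previous page (the reversed TOP always returns a full @PageSize rows when that many exist); the ROW_NUMBER() approach above does not have that quirk.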



Using TRY-CATCH to Rollback a Transaction in the Face of an Error


Error Handling in SQL Server 2005.
The release of SQL Server 2005 provided many new features over its predecessor. No doubt most of the attention went to tasks performed by administrators, but some new development features were also added to make your SQL code, and especially stored procedures, more powerful and error resistant.
The Feature: TRY…CATCH
The most impressive improvement added for developers is the exception handling technique. There is no good reason not to write your code in a TRY…CATCH block.
• A TRY Block – the TRY block contains the code / script that might cause an exception
• A CATCH Block – if an exception occurs from one of the statements in the TRY block, control is branched to the CATCH block, where the exception can be handled, logged, and so on.
Checking @@ERROR: the SQL Server 2000 Way of Handling Errors in Stored Procedures
Just have a look at the stored procedure example below.
CREATE PROC usp_AccountTransaction
@AccountNum INT,
@Amount DECIMAL
AS
BEGIN
BEGIN TRANSACTION --beginning a transaction..
UPDATE MyChecking SET Amount = Amount - @Amount
WHERE AccountNum = @AccountNum
IF @@ERROR != 0 --check the @@ERROR variable after each DML statement..
BEGIN
ROLLBACK TRANSACTION --roll back the transaction on error..
RETURN
END
ELSE
BEGIN
UPDATE MySavings SET Amount = Amount + @Amount
WHERE AccountNum = @AccountNum
IF @@ERROR != 0 --check the @@ERROR variable after each DML statement..
BEGIN
ROLLBACK TRANSACTION --roll back the transaction on error..
RETURN
END
ELSE
BEGIN
COMMIT TRANSACTION --finally, commit the transaction on success..
RETURN
END
END
END
GO
Yes, this is how we used to code a stored procedure in SQL Server 2000: check @@ERROR after every DML (data manipulation) statement and commit or roll back the transaction accordingly.
In SQL Server 2000, errors could only be detected by checking the global error variable @@ERROR. Because the value of @@ERROR is reset after each SQL statement, this leads to rather bloated stored procedures, as the variable must be checked after each statement, with code to handle any problems.
The TRY…CATCH block in SQL Server 2005 offers a much more readable syntax and one that developers are more familiar with. And yes, SQL Server 2005 still supports the @@ERROR approach. In this article we'll look at the new TRY…CATCH block and examine how it can be used to roll back a transaction in the face of an error. Let's move on to it!
Handling Errors With SQL Server 2005’s TRY…CATCH Blocks
In fact, there is really nothing new to describe about the TRY…CATCH block itself; as in other programming languages, a TRY…CATCH block executes a number of statements in the TRY block. If none of the statements raises an error, control proceeds to the point after the CATCH block. If, however, one of the statements causes an error, control branches immediately to the start of the CATCH block.
The basic syntax is:
BEGIN TRY
Try Statement 1
Try Statement 2

Try Statement M
END TRY
BEGIN CATCH
Catch Statement 1
Catch Statement 2

Catch Statement N
END CATCH

The following system functions are available in the CATCH block and can be used to determine additional error information:
Function                           Description
ERROR_NUMBER()             Returns the number of the error.
ERROR_SEVERITY()           Returns the severity.
ERROR_STATE()                 Returns the error state number.
ERROR_PROCEDURE()       Returns the name of the stored procedure  where the error occurred.
ERROR_LINE()                   Returns the line number inside the routine that caused the error.
ERROR_MESSAGE()            Returns the complete text of the error message.

Take a look at the example below:
BEGIN TRY
SELECT GETDATE()
SELECT 1/0 --the evergreen divide-by-zero example!
END TRY
BEGIN CATCH
SELECT 'There was an error! ' + ERROR_MESSAGE()
RETURN
END CATCH;
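For completeness, the other functions from the table above can be captured in the same way; a minimal sketch (the column aliases are just illustrative):
BEGIN TRY
SELECT 1/0 --force an error
END TRY
BEGIN CATCH
SELECT
ERROR_NUMBER() AS ErrorNumber,
ERROR_SEVERITY() AS ErrorSeverity,
ERROR_STATE() AS ErrorState,
ERROR_PROCEDURE() AS ErrorProcedure, --NULL when the error occurred outside a stored procedure
ERROR_LINE() AS ErrorLine,
ERROR_MESSAGE() AS ErrorMessage
END CATCH;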
Using TRY…CATCH to Rollback a Transaction in the Face of an Error
As you saw in the earlier example, one of the downsides of the @@ERROR approach is that, to implement a transaction, we must check this variable after each and every DML statement to determine whether an error occurred and, if so, roll back the transaction. With SQL Server 2005's TRY…CATCH block, however, these scripts are greatly simplified.
Let's alter the previous example!
ALTER PROC usp_AccountTransaction
@AccountNum INT,
@Amount DECIMAL
AS
BEGIN
BEGIN TRY --start the TRY block..
BEGIN TRANSACTION --start the transaction..
UPDATE MyChecking SET Amount = Amount - @Amount
WHERE AccountNum = @AccountNum
UPDATE MySavings SET Amount = Amount + @Amount
WHERE AccountNum = @AccountNum
COMMIT TRAN --transaction succeeded!
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
ROLLBACK TRAN --roll back in case of error
--re-raise the error with RAISERROR, including the details of the exception
--(RAISERROR cannot take ERROR_MESSAGE() as an argument directly, so capture the values in variables first)
DECLARE @ErrMsg NVARCHAR(4000), @ErrSeverity INT
SELECT @ErrMsg = ERROR_MESSAGE(), @ErrSeverity = ERROR_SEVERITY()
RAISERROR(@ErrMsg, @ErrSeverity, 1)
END CATCH
END
GO
Just look at the simplicity and the amount of code compared with the previous example!
In the TRY block a transaction is started and the two UPDATE statements are performed. If both UPDATEs succeed, the COMMIT is reached and the transaction is committed. If, however, either one produces an error, control jumps to the CATCH block, where the transaction is rolled back.
Also, you can re-raise the error (using RAISERROR) so that the error information is passed up to the .NET application calling the stored procedure, in case you want to use it to decide on further processing steps.
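On the .NET side, the re-raised error surfaces as a SqlException. The following C# fragment is only a sketch of that, assuming the procedure above and a hypothetical connection string; it is not part of the original example:

using System;
using System.Data;
using System.Data.SqlClient;

class TransferExample
{
    static void Main()
    {
        // Hypothetical connection string; replace with your own.
        using (SqlConnection cn = new SqlConnection("Data Source=.;Initial Catalog=MyBank;Integrated Security=True"))
        using (SqlCommand cmd = new SqlCommand("usp_AccountTransaction", cn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@AccountNum", 12345);
            cmd.Parameters.AddWithValue("@Amount", 100m);
            try
            {
                cn.Open();
                cmd.ExecuteNonQuery();
            }
            catch (SqlException ex)
            {
                // The message re-raised by RAISERROR in the CATCH block arrives here.
                Console.WriteLine("Transfer failed: " + ex.Message);
            }
        }
    }
}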



Passing lists to SQL Server stored procedures


This article is about:

The ability to pass “a list of values” from .Net as a parameter to a T-SQL based stored procedure.

Scenarios:

There are lots of scenarios where we need to pass a list of values to save in the database. Here are a couple of obvious ones:

· INSERT a list of values into the database in one “chunky” call (e.g. some IDs from a CheckBoxList)

· SELECT rows where IDs are IN (<list of IDs>)

Some general Approaches:

Taking the INSERT statements as an example, there are various general approaches that we adopt to achieve the desired result:

· Use dynamic / inline SQL! But frankly, dynamic / inline SQL is rarely the ideal solution, for obvious reasons.

· Make a stored proc call for each ID to insert. This is the most common approach we see in various projects, mainly because it is the easiest to implement. The drawback of course is if we were to insert 60 values, it would result in 60 “chatty” calls to the database.

· Pass comma separated values via a VARCHAR (or similar) parameter. This works fine but has messy “string splitting” in the stored procedure to extract the IDs and then build the SQL statement in the procedure itself. Prone to SQL injection and not the best performance.

· Pass the list as an XML parameter. This is nicer and is the point of this article.

Coming to the main Point, Using XML:

Using XML for “list passing” has a number of benefits, in particular the ability to pass lists of more “complex types” rather than just single values.

Let's take an example. Suppose we have two CheckBoxList controls: one is a list of users and the other is a list of tasks / roles that can be assigned to users. We want to store the selections in a table that has the fields UserID and TaskID. The stored procedure accepts a parameter of the XML data type:

CREATE PROCEDURE [dbo].[usp_InsertUserTask]
@UserTaskXML XML
AS
BEGIN
INSERT INTO UserTasks (UserID,TaskID)
SELECT
UserTaskTab.UserTaskCol.value('UserID[1]','int') AS UserID,
UserTaskTab.UserTaskCol.value('TaskID[1]','int') AS TaskID
FROM @UserTaskXML.nodes('//UserTaskList/UserTaskData') AS UserTaskTab(UserTaskCol)
END

To call this stored procedure, you would write something like this:

EXEC    [dbo].[usp_InsertUserTask]
@UserTaskXML = '<UserTaskList>
<UserTaskData>
<UserID>1</UserID>
<TaskID>100</TaskID>
</UserTaskData>
<UserTaskData>
<UserID>2</UserID>
<TaskID>200</TaskID>
</UserTaskData>
</UserTaskList>'

In your application, your C# calling code could be:

SqlConnection sqlCN = new SqlConnection();
sqlCN.ConnectionString = ConfigurationManager.AppSettings["DBConn"].ToString();
string strQuery = "usp_InsertUserTask";
SqlParameter[] sqlParams = new SqlParameter[1];
sqlParams[0] = new SqlParameter("@UserTaskXML", GetUserTaskListXML());
SqlHelper.ExecuteNonQuery(sqlCN, CommandType.StoredProcedure, strQuery, sqlParams);
if (sqlCN.State == ConnectionState.Open)
sqlCN.Close();
sqlCN.Dispose();

which calls the method below to translate the UserID and TaskID from CheckBoxLists into an XML String:

private string GetUserTaskListXML()
{
    StringBuilder XMLString = new StringBuilder();
    try
    {
        XMLString.AppendFormat("<UserTaskList>");
        for (int iUserCount = 0; iUserCount < UserCheckBoxList.Items.Count; iUserCount++)
        {
            if (UserCheckBoxList.Items[iUserCount].Selected)
            {
                for (int iTaskCount = 0; iTaskCount < TaskCheckBoxList.Items.Count; iTaskCount++)
                {
                    if (TaskCheckBoxList.Items[iTaskCount].Selected)
                    {
                        XMLString.AppendFormat("<UserTaskData>");
                        XMLString.AppendFormat("<UserID>{0}</UserID>", UserCheckBoxList.Items[iUserCount].Value);
                        XMLString.AppendFormat("<TaskID>{0}</TaskID>", TaskCheckBoxList.Items[iTaskCount].Value);
                        XMLString.AppendFormat("</UserTaskData>");
                    }
                }
            }
        }
        XMLString.AppendFormat("</UserTaskList>");
    }
    catch (Exception)
    {
        throw; // rethrow, preserving the original stack trace
    }
    return XMLString.ToString();
}

Here, a StringBuilder is used for the XML concatenation because in this case I think it fits the bill, but purists might prefer an XmlTextWriter approach. In summary, it performs very well and adapts easily to various lists of objects and more complex structures.
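
Finally, the same technique also covers the second scenario from the list at the start of this article: SELECTing rows whose IDs are IN a passed-in list. The following is only a sketch, assuming a hypothetical <IDList><ID>…</ID></IDList> document shape and a Users table keyed by UserID:

CREATE PROCEDURE [dbo].[usp_GetUsersByIDList]
@UserIDXML XML
AS
BEGIN
SELECT U.*
FROM Users U
INNER JOIN (
    -- shred the XML list into a one-column rowset of IDs
    SELECT IDTab.IDCol.value('.', 'int') AS UserID
    FROM @UserIDXML.nodes('/IDList/ID') AS IDTab(IDCol)
) AS IDs ON IDs.UserID = U.UserID
END

It would be called, for example, with @UserIDXML = '<IDList><ID>1</ID><ID>2</ID></IDList>'.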



Web Parts


This article discusses Web Parts, a new feature introduced in ASP.NET 2.0. Web Parts are objects that the end user can open, close, or move from one zone of the page to another. Web Parts allow for personalization of page content: users can move or hide Web Parts and add new ones, changing the page layout.

Web Parts Modes

Modes are very powerful in that they enable users to edit, delete, or customize Web Parts.

a) Normal mode: The end user cannot edit or move sections of the page; this is simple browse mode.

b) Edit mode: The end user can edit Web Parts on the page, including the Web Part title and custom properties.

c) Design mode: The end user can rearrange the order of the page's Web Parts within a WebPartZone.

d) Catalog mode: The end user can add new Web Parts to any WebPartZone on the page.

Web Part Manager:-

The WebPartManager control is a server control that completely manages the state of the zones. This control does not have any visual interface. You can have only one WebPartManager on each page that works with the Portal Framework.

Web Part Zone:-

You can declare each Web Part zone in one of two ways: use the <asp:WebPartZone> element directly in the markup, or create the zones within a table by dragging and dropping WebPartZone controls onto the design surface. You can place anything in a zone, including HTML elements, web server controls, user controls, and custom controls; anything placed into a WebPartZone is treated as a Web Part. Useful attributes of WebPartZone include the LayoutOrientation attribute, which controls whether items are displayed horizontally or vertically.

Catalog Zone:- (To add new Web Parts)—Catalog Mode

The ASP.NET 2.0 Portal Framework enables an end user to add Web Parts, but you must also provide the end user with a list of items they can add. The Catalog Zone is designed to allow for categorization of the items that can be placed on the page, and it is also a template control. The CatalogZone control contains a title and a checkbox list of items that can be selected, along with a drop-down list of all available Web Part Zones on the page. From here, you can place the selected Web Parts into one of the Web Part Zones available in the list.

The CatalogZone contains CatalogPart controls such as DeclarativeCatalogPart, PageCatalogPart, and ImportCatalogPart.

·       PageCatalogPart: Provides a page catalog of Web Part controls that a user has closed on a Web Parts page, and that the user can add back to the page.

·       DeclarativeCatalogPart: Enables developers to add a catalog of Web Part controls to a Web page so that users can choose whether to add them to a page.

·       ImportCatalogPart: Imports Web Part controls so that they can be added to a web page with pre-assigned settings.

Changing mode of page:-

You can change the mode of the page either through the WebPartManager class directly or through the WebPartManager server control. Changing the mode allows users to make changes to the pages they are working with. All the mode-related changes are recorded in the ASPNETDB.MDF database in the App_Data directory, created exclusively for Web Parts. Using the WebPartManager object, you can add new Web Parts to the page; it also enables the end user to drag and drop elements around the page.
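
As a minimal code-behind sketch of switching modes (assuming an .aspx page that declares <asp:WebPartManager ID="WebPartManager1" runat="server" /> and two buttons wired to these handlers; the class and button names are illustrative):

using System;
using System.Web.UI;
using System.Web.UI.WebControls.WebParts;

public partial class WebPartsDemo : Page
{
    protected void EditModeButton_Click(object sender, EventArgs e)
    {
        // The built-in display modes are exposed as static fields on WebPartManager.
        if (WebPartManager1.SupportedDisplayModes.Contains(WebPartManager.EditDisplayMode))
            WebPartManager1.DisplayMode = WebPartManager.EditDisplayMode;
    }

    protected void BrowseModeButton_Click(object sender, EventArgs e)
    {
        // Return to normal browse mode.
        WebPartManager1.DisplayMode = WebPartManager.BrowseDisplayMode;
    }
}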

Moving Web Parts—Design Mode

We can also move Web Parts from one zone to another; this is possible through Design mode. To move a control, hover the mouse over its title and you will see a crosshair cursor. Press and hold the left mouse button and drag the Web Part to any WebPartZone. While being dragged the control becomes transparent, and releasing the button drops it into the target WebPartZone.

Editing the Web Parts—Editing Mode

Another mode that allows the end user to edit Web Parts is Edit mode. This mode enables users to modify the settings related to the behavior, appearance, and layout of a particular Web Part on the page. When the user changes the mode to Edit, the Appearance and Layout editors appear in the EditorZone. The Appearance section lets users change the title and how it appears; the Layout section lets them change the order in which Web Parts appear in a zone or move Web Parts between zones; the Behavior section lets site editors change how end users may modify Web Parts.

Connections Between Web Parts:-

Web parts are also capable of exchanging data between them, using web part connections. Using connections, you can have one web part provide one or more property values that can be used by other web parts on the page.

A Web Part connection is a mechanism for sharing or transferring data from one Web Part (called the provider) to another Web Part (called the consumer). In other words, it is the ability to expose an interface from one Web Part (the provider) that another Web Part (the consumer) can connect to and use.

·       Connection types

Provider :-
– Control that provides data/information
– Implements a provider connection point
– Defines a callback that returns an instance of the interface
– One provider connection point can connect to any number of consumer connection points of the same type

Consumer :-
– Control that gets data
– Implements a consumer connection point
– Defines a callback that receives the instance of the interface returned by the provider
– One consumer connection point can connect to only one provider connection point of the same type

·       Connection: establishes a communication channel between provider and consumer Web Parts so that they can exchange the required information as defined in the communication contract. A connection is established between two connection points. The ConnectionPoint base class defines an object that is associated with a consumer or provider and contains the details necessary to exchange data. The ProviderConnectionPoint is associated with the provider, and the ConsumerConnectionPoint is associated with the consumer.

When declaring a connection (for example, with the <asp:WebPartConnection> element), you must specify the following required attributes in addition to the ID and runat attributes:

·       ConsumerID – Indicates the ID of the consumer control in the connection.

·       ConsumerConnectionPointID – Indicates the ID of a special callback method in the consumer used to establish the connection. This attribute is required only if the consumer has more than one connection point. For details on connection points, see ConnectionPoint.

·       ProviderID – Indicates the ID of the provider control in the connection.

·       ProviderConnectionPointID – Indicates the ID of a special callback method in the provider used to establish the connection. This attribute is required only if the provider has more than one connection point.
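
As a minimal sketch of the provider and consumer halves in code (ConnectionProvider and ConnectionConsumer are the real framework attributes; the IZipCodeProvider interface, class names, and zip code value are purely illustrative):

using System.Web.UI;
using System.Web.UI.WebControls.WebParts;

// Hypothetical contract exchanged over the connection.
public interface IZipCodeProvider
{
    string ZipCode { get; }
}

// Provider Web Part: exposes the interface through a callback method.
public class ZipCodeProviderPart : WebPart, IZipCodeProvider
{
    public string ZipCode
    {
        get { return "90210"; }  // illustrative value
    }

    [ConnectionProvider("Zip Code Provider")]
    public IZipCodeProvider GetZipCodeProvider()
    {
        return this;
    }
}

// Consumer Web Part: receives the interface instance from the provider.
public class ZipCodeConsumerPart : WebPart
{
    private IZipCodeProvider _provider;

    [ConnectionConsumer("Zip Code Consumer")]
    public void SetZipCodeProvider(IZipCodeProvider provider)
    {
        _provider = provider;
    }

    protected override void RenderContents(HtmlTextWriter writer)
    {
        if (_provider != null)
            writer.Write("Zip code from provider: " + _provider.ZipCode);
    }
}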




New in C-Sharp 3.0

This article discusses the following major new enhancements expected in C# 3.0:
• Implicitly typed local variables
• Anonymous types
• Extension methods
• Object and collection initializers
• Lambda expressions
• Query expressions
• Expression Trees

Implicitly typed local variables:
C# 3.0 introduces a new keyword called “var”. Var allows you to declare a new variable, whose type is implicitly inferred from the expression used to initialize the variable.
Syntax: var i=1;
The preceding line initializes the variable i to the value 1 and gives it the type int. It is not an object or a variant.
Anonymous types:
C# 3.0 gives you the flexibility to create an instance of a class without having to write code for the class beforehand. So, you now can write code as shown below:
new {StudentID=1, StudentName="XYZ", Marks=90}
The preceding line of code, with the help of the “new” keyword, gives you a new type that has three properties: StudentID, StudentName, and Marks. Behind the scenes, the C# compiler would create a class that looks as follows:
class __Anonymous1
{
private int _StudentID = 1;
private string _StudentName = "XYZ";
private int _Marks = 90;
public int StudentID {get { return _StudentID; } set { _StudentID = value; }}
public string StudentName {get { return _StudentName; } set { _StudentName = value; }}
public int Marks {get { return _Marks; } set { _Marks = value; }}
}

Extension methods:

Extension methods enable you to extend various types with additional static
methods. Extension methods can be declared only in static classes and are identified by
the keyword “this” as a modifier on the first parameter of the method.

The following is an example of a valid extension method:
public static int ToInt32(this string s)
{
return Convert.ToInt32(s) ;
}
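To use it, the method must live in a non-nested static class. The following is a small self-contained sketch (the class and variable names are illustrative):

using System;

public static class StringExtensions
{
    // The same extension method as above, placed in a static class as required.
    public static int ToInt32(this string s)
    {
        return Convert.ToInt32(s);
    }
}

class Demo
{
    static void Main()
    {
        string price = "250";
        // The extension method can be called as if it were an instance method on string.
        int value = price.ToInt32();  // equivalent to StringExtensions.ToInt32(price)
        Console.WriteLine(value);
    }
}
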
Object and collection initializers:
C# 3.0 is expected to allow you to include an initializer that specifies the initial values of the members of a newly created object or collection. This enables you to combine declaration and initialization in one step.
For instance, if you defined a CoOrdinate class as follows:
public class CoOrdinate
{
public int x;
public int y;
}

You then could declare and initialize a CoOrdinate object using an object initializer, like this:

var myCoOrd = new CoOrdinate{ x = 0, y= 0} ;
You should easily be able to give values to collections in a rather concise and compact manner in C# 3.0.
In C# 2.0 we write:

List<string> animals = new List<string>();

animals.Add("monkey");
animals.Add("donkey");
animals.Add("cow");
animals.Add("dog");
animals.Add("cat");

In C# 3.0 this can be shortened to simply:
List<string> animals = new List<string> {"monkey", "donkey", "cow", "dog", "cat"};
Lambda expressions:
C# 1.x allowed you to write code blocks in methods, which you could invoke easily using delegates. Delegates are definitely useful, and they are used throughout the framework, but in many instances you had to declare a method or a class just to use one. Thus, to give you an easier and more concise way of writing code, C# 2.0 allowed you to replace standard calls to delegates with anonymous methods.
In C# 2.0, using anonymous methods, you could rewrite the code as follows:
class Program
{
delegate void DemoDelegate();
static void Main(string[] args)
{
DemoDelegate myDelegate = delegate()
{
Console.WriteLine("Hiya!!");
};
myDelegate();
}
}
The above code can now be replaced with the following code in C# 3.0:
class Program
{
delegate void DemoDelegate();
static void Main(string[] args)
{
DemoDelegate myDelegate = () => Console.WriteLine("Hiya!!");
myDelegate();
}
}

Reference Site: http://www.developer.com/net/csharp/article.php/10918_3561756_1



Threading


C# supports parallel execution of code through multithreading. A thread is an independent execution path, able to run simultaneously with other threads. A C# program starts in a single thread created automatically by the CLR and operating system (the “main” thread), and is made multi-threaded by creating additional threads. The CLR assigns each thread its own memory stack so that local variables are kept separate.

Threading enables your C# program to perform concurrent processing so you can do more than one operation at a time. The System.Threading namespace provides classes and interfaces that support multithreaded programming and enable you to easily perform tasks such as creating and starting new threads, synchronizing multiple threads, suspending threads, and aborting threads. The advantage of threading is the ability to create applications that use more than one thread of execution.

How Threading Works

Multithreading is managed internally by a thread scheduler, a function the CLR typically delegates to the operating system. A thread scheduler ensures all active threads are allocated appropriate execution time, and that threads that are waiting or blocked (for instance, on an exclusive lock or on user input) do not consume CPU time.

On a single-processor computer, a thread scheduler performs time-slicing – rapidly switching execution between each of the active threads.

On a multi-processor computer, multithreading is implemented with a mixture of time-slicing and genuine concurrency, where different threads run code simultaneously on different CPUs. It’s almost certain there will still be some time-slicing, because of the operating system’s need to service its own threads as well as those of other applications.

Killing a Thread:

We can kill a thread by calling the Abort method of the thread:

MyThread.Abort();

Suspending and Resuming a Thread:

We can suspend the execution of a thread and later resume it from another thread using the Thread object’s Suspend and Resume methods. (Note that Suspend and Resume are marked obsolete in .NET 2.0 and later.)

MyThread.Suspend();   // suspends the thread's execution

MyThread.Resume();    // resumes the suspended thread's execution

Creating and Starting Threads

Threads are created using the Thread class’s constructor, passing in a ThreadStart delegate that indicates the method where execution should begin. Here’s how the ThreadStart delegate is defined:

public delegate void ThreadStart();

Calling Start on the thread then sets it running. The thread continues until its method returns, at which point the thread ends. Here’s an example, using the expanded C# syntax for creating a ThreadStart delegate:

class ThreadTest {

  static void Main() {

    Thread t = new Thread (new ThreadStart (Go));

    t.Start();   // Run Go() on the new thread.

    ……

  }

  static void Go() { …..}
}

A thread can be created more conveniently using C#’s shortcut syntax for instantiating delegates:

static void Main() {

  Thread t = new Thread (Go);    // No need to explicitly use ThreadStart

  t.Start();

 

}

static void Go() { … }

In this case, a ThreadStart delegate is inferred automatically by the compiler. Another shortcut is to use an anonymous method to start the thread:

static void Main() {

  Thread t = new Thread (delegate() { Console.WriteLine ("Hello!"); });

  t.Start();

}

A thread has an IsAlive property that returns true after its Start() method has been called, up until the thread ends. A thread, once ended, cannot be re-started.
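
A minimal sketch pulling these pieces together (the worker method, the Sleep delay, and the console output are illustrative):

using System;
using System.Threading;

class ThreadLifetimeDemo
{
    static void Main()
    {
        Thread worker = new Thread(new ThreadStart(Work));
        Console.WriteLine(worker.IsAlive);  // False: Start() has not been called yet

        worker.Start();
        Console.WriteLine(worker.IsAlive);  // True while the thread is still running

        worker.Join();                      // wait for the thread to finish
        Console.WriteLine(worker.IsAlive);  // False: the thread has ended

        // worker.Start();                  // would throw ThreadStateException:
                                            // a finished thread cannot be restarted
    }

    static void Work()
    {
        Thread.Sleep(500);                  // simulate some work
        Console.WriteLine("Worker finished.");
    }
}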