Archives for category: Entity Framework

Let’s face it.  There are times when a stored procedure is going to create a more efficient query, and provide the flexibility you need to return attributes from multiple SQL tables in a non-traditional way.

In my case, I had a number of different log tables that I wanted to combine into a single result.  My log tables contained different field attributes, and so I was not able to simply use the ObjectQuery<T>.Union method to combine my results.  It was much easier to simply create a stored procedure that could translate the information into a nice generic result from the combined tables.

It was this result that I wanted to work with, not the individual tables.  You can accomplish this in three easy steps:

1) Update your Model diagram (edmx) with the stored procedure you want to add.  (Update Model from Database –> Add –> Stored Procedure)

step1

2) Right-click on your diagram and choose Add –> Function Import.  Enter the name you want to use as the Entity function to retrieve the stored procedure.  Select the stored procedure you want to associate with this Entity Function.  Press Get Column Information.  Then select Create New Complex Type.  Enter the name of the custom Complex Type Entity you want to associate with the stored procedure.

step2

3) That’s it!  Now you can use the function in a very elegant way and still have a strongly typed object to work with.

step3
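To illustrate step 3, here is a sketch of what the call site might look like. The context name (MyEntities), function import name (GetCombinedLogs), and complex type name (CombinedLogResult) are all hypothetical placeholders; substitute the names you chose in step 2.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical names: "MyEntities" context, "GetCombinedLogs" function import,
// and "CombinedLogResult" complex type -- substitute your own.
using (MyEntities context = new MyEntities())
{
    // The function import returns a strongly typed result set
    // built from the complex type created in step 2.
    List<CombinedLogResult> logs = context.GetCombinedLogs().ToList();

    foreach (CombinedLogResult log in logs)
    {
        Console.WriteLine("{0}: {1}", log.LogDate, log.Message);
    }
}
```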


I have an auto-generated entity framework Entity Data Model Designer File (edmx).  And while I am using Data Annotations for validation purposes, there are times when retrieving the maxlength of an entity property programmatically would be very useful.

image

In some cases, I may want to simply truncate the value, rather than throw an error if the maxlength of a property is exceeded.  Or I may want to surface the maxlength property to the client for client-side validation purposes.

It turns out to be a simple query, but it took me a while to find it.  You will need to reference the following libraries:

using System.Data.Objects.DataClasses;
using System.Data.Metadata.Edm;

This is the extension method to retrieve the MaxLength facet.  Please note that you may get an exception if the property does not have a MaxLength facet specified.

public static int? GetMaxLength(this EntityObject entityObject, string entityProperty)
{
    int? result = null;
    using (CATDBEntities _context = new CATDBEntities())
    {
        var queryResult = from meta in _context.MetadataWorkspace.GetItems(DataSpace.CSpace)
                              .Where(m => m.BuiltInTypeKind == BuiltInTypeKind.EntityType)
                          from p in (meta as EntityType).Properties
                              .Where(p => p.DeclaringType.Name == entityObject.GetType().Name
                                  && p.Name == entityProperty
                                  && p.TypeUsage.EdmType.Name == "String")
                          select p.TypeUsage.Facets["MaxLength"].Value;

        if (queryResult.Count() > 0)
        {
            result = Convert.ToInt32(queryResult.First());
        }
    }
    return result;
}

To call this method, simply instantiate the entity object and pass in the proper parameters:

project _project = new project();
int? maxLength = DataAnnotation.GetMaxLength(_project, "project_name");
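Putting it to use for the truncation scenario mentioned above, here is a sketch that trims a value to fit rather than letting the save fail. It assumes the GetMaxLength method and the project entity shown above; the input string is just an illustrative value.

```csharp
// Sketch: truncate rather than throw when a value exceeds the MaxLength facet.
// Assumes the GetMaxLength extension method and "project" entity shown above.
project _project = new project();
string input = "Some potentially long project name supplied by a user";

int? maxLength = DataAnnotation.GetMaxLength(_project, "project_name");
if (maxLength.HasValue && input.Length > maxLength.Value)
{
    // Silently trim the value to the maximum the column allows.
    input = input.Substring(0, maxLength.Value);
}
_project.project_name = input;
```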

There are two basic techniques for updating one table with information from another. In this example, we are updating an OLAP dimension table with the results from an OLTP table.
 
Flush and Fill Technique
The first and easiest method is called the “flush and fill” technique. Basically, you remove all of the data from the Dimension table and fill it with data from an OLTP table. This is the simplest method by far, but can be extremely inefficient if we’re talking about tables with a lot of rows. The only thing you need to decide here is whether you are going to use the DELETE command or the TRUNCATE command.
 
DELETE COMMAND
Did you know that if you run the following command…
 
DELETE FROM [dbo].[DimDeliverable]
 
…and the DimDeliverable table contains 5,000 rows, that the command effectively executes 5,000 times, once for each row, because DELETE removes and logs rows one-by-one? Also, if you use the DELETE command, your IDENTITY key does not automatically RESEED itself. You would also need to explicitly reseed the identity field…
DBCC CHECKIDENT ('DimDeliverable', RESEED)
…or use this statement to reseed the identity field to start at a specific value.
DBCC CHECKIDENT ('DimDeliverable', RESEED, 1)
 
TRUNCATE COMMAND
An alternative to the DELETE command is the TRUNCATE command. The nice thing about the TRUNCATE command is that it is logged as a single operation, so it is much more efficient, and it also automatically reseeds your identity key. How does it work? The TRUNCATE command deallocates the data pages that internally store the data in SQL Server. One thing to note, however, is that while TRUNCATE is the quickest way to clear out a SQL table, it cannot be used on a table that is referenced by a foreign key constraint. You would first need to remove the constraint, as shown in the following example:
 
-- temporarily remove constraint
ALTER TABLE [dbo].[DimDeliverable] DROP CONSTRAINT [PK_DimDeliverable]

-- truncate table
TRUNCATE TABLE [dbo].[DimDeliverable]

-- re-add constraint
ALTER TABLE [dbo].[DimDeliverable] ADD CONSTRAINT [PK_DimDeliverable] PRIMARY KEY ([DeliverableKey])
 
Incremental Loading Technique
This technique is much more efficient than the “flush and fill” technique in that it only updates the items that have changed. While it is more efficient, it is definitely not simpler, so you will probably still use “flush and fill” on your smaller dimension tables. SQL Server 2008 introduced a new command, MERGE, to support the “incremental loading” technique.
Type 1:  Updates are overwritten and history of changes is not kept. 
This statement does the following: 1) if a record does not exist, create it; 2) if a record exists but is different, update it; 3) if a record exists in the target but not in the source, then deactivate it.
 
use CatapultDatawarehouse
GO
-- Declare Variables
DECLARE @BC int = 570
-- TYPE 1 (Overwrites history) - DimDeliverable: CATINDB06.CAT_DB.project_deliverable
BEGIN TRY -- perform in a transaction so if it fails values are rolled back
      BEGIN TRAN
      MERGE DimDeliverable AS [Target] -- begin merge statements (merge statements end with a semi-colon)
      USING CATINDB06.CAT_DB.project_deliverable AS [Source]
      ON [Target].[DeliverableId] = [Source].[deliverable_id]
      WHEN MATCHED AND -- record exists but values are different
      (
            [Target].[Name] <> [Source].[deliverable_name]
            OR [Target].[EstimatedHours] <> [Source].[est_hours]
            OR [Target].[EstimatedRate] <> [Source].[est_billing_rate]
            OR [Target].[EstimatedRevenue] <> [Source].[total_amount]
      )
      THEN UPDATE SET -- update records (Type 1 means record values are overwritten)
            [Target].[Name] = [Source].[deliverable_name]
            , [Target].[EstimatedHours] = [Source].[est_hours]
            , [Target].[EstimatedRate] = [Source].[est_billing_rate]
            , [Target].[EstimatedRevenue] = [Source].[total_amount]
      WHEN NOT MATCHED BY TARGET -- record does not exist
      THEN INSERT -- insert record
      (
            [Name]
            , [EstimatedHours]
            , [EstimatedRate]
            , [EstimatedRevenue]
      )
      VALUES
      (
            [Source].[deliverable_name]
            , [Source].[est_hours]
            , [Source].[est_billing_rate]
            , [Source].[total_amount]
      )
      WHEN NOT MATCHED BY SOURCE -- record exists in target but not source
      THEN DELETE -- delete from target
      OUTPUT $action, inserted.*, deleted.*; -- output results
      COMMIT TRAN
END TRY
BEGIN CATCH
      ROLLBACK TRAN
END CATCH
GO
 
Type 2:  Slowly Changing Dimensions track historical changes. 
In this case the merge statement is simply checking to see if a row exists but contains changes.  If this is the case then the merge statement simply deactivates that row but keeps its data intact for historical purposes.  A new row then is created and marked as the current (most up-to-date) information using an INSERT statement that is wrapped around the SQL MERGE.
 
-- TYPE 2 (Tracks history) - DimDeliverable: CATINDB06.CAT_DB.project_deliverable
BEGIN TRY -- perform in a transaction so if it fails values are rolled back
      BEGIN TRAN
      INSERT INTO DimDeliverable
      (
            [Name]
            , [EstimatedHours]
            , [EstimatedRate]
            , [EstimatedRevenue]
            , [IsCurrentRow]
      )
      SELECT
            [deliverable_name]
            , [est_hours]
            , [est_billing_rate]
            , [total_amount]
            , 1
      FROM
      (
            MERGE DimDeliverable AS [Target] -- begin merge statements (merge statements end with a semi-colon)
            USING CATINDB06.CAT_DB.project_deliverable AS [Source]
            ON [Target].[DeliverableId] = [Source].[deliverable_id] AND [Target].[IsCurrentRow] = 1
            WHEN MATCHED AND -- record exists but values are different
            (
                  [Target].[Name] <> [Source].[deliverable_name]
                  OR [Target].[EstimatedHours] <> [Source].[est_hours]
                  OR [Target].[EstimatedRate] <> [Source].[est_billing_rate]
                  OR [Target].[EstimatedRevenue] <> [Source].[total_amount]
            )
            THEN UPDATE SET -- deactivate the old row but keep its values for history (Type 2)
                  [Target].[IsCurrentRow] = 0
                  , [Target].[ValidTo] = GETDATE()
            WHEN NOT MATCHED BY TARGET -- record does not exist
            THEN INSERT -- insert record
            (
                  [Name]
                  , [EstimatedHours]
                  , [EstimatedRate]
                  , [EstimatedRevenue]
                  , [IsCurrentRow]
            )
            VALUES
            (
                  [Source].[deliverable_name]
                  , [Source].[est_hours]
                  , [Source].[est_billing_rate]
                  , [Source].[total_amount]
                  , 1
            )
            WHEN NOT MATCHED BY SOURCE -- record exists in target but not source
            THEN DELETE -- delete from target
            OUTPUT $action AS Action, [Source].* -- output results
      ) AS MergeOutput
      WHERE
            MergeOutput.Action = 'UPDATE'
      ;
      COMMIT TRAN
END TRY
BEGIN CATCH
      ROLLBACK TRAN
END CATCH
GO
 

I love using the XrmServiceContext to write LINQ queries that retrieve items from our CRM implementation. It is so much nicer to work with than the old SOAP model!

However, I ran into an issue pretty quickly with a custom service I created that provisions a SharePoint team site for a CRM opportunity. I was getting intermittent caching errors. The script’s attempt to retrieve the opportunity record from CRM would result in the following error:

Sequence contains no elements

I researched the issue and was unable to find a way to configure caching and even attempted to explicitly remove items from the cache, but the only thing that worked for me was to dispose and recreate the XrmServiceContext object as follows:

private void RefreshContext() {
    if (context != null) {
        context.Dispose();
    }
    if (serviceProxy != null) {
        serviceProxy.Dispose();
    }

    ClientCredentials credentials = new ClientCredentials();
    credentials.UserName.UserName = MscrmWebService.LoginName;
    credentials.UserName.Password = MscrmWebService.Password;
    Uri organizationUri = new Uri(MscrmWebService.Link);
    Uri homeRealmUri = null;

    // The proxy must stay alive for as long as the context uses it,
    // so it is not wrapped in a using block here; it is disposed on
    // the next refresh instead.
    serviceProxy = new OrganizationServiceProxy(organizationUri, homeRealmUri, credentials, null);
    serviceProxy.ServiceConfiguration.CurrentServiceEndpoint.Behaviors.Add(new ProxyTypesBehavior());
    service = (IOrganizationService)serviceProxy;
    context = new XrmServiceContext(service);
}

public Opportunity RetrieveOpportunity(Guid opportunityId) {
RefreshContext();
return context.OpportunitySet.Where(e => e.OpportunityId == opportunityId).Single();
}

I am creating a Flot Line Chart in .Net based on the following Entity Framework result set.

ef_distinct_sort_rcerda

I want to get a distinct list of the report categories in alphabetical order to use as a control group.

LINQ provides an easy Distinct() method to retrieve distinct values from a data set, but it wasn’t completely obvious to me how to make sure that the result came back in ascending order.

Turns out, it’s pretty easy.

List<string> categories = reportSummary.Select(m => m.CategoryName).Distinct().OrderBy(m => m).ToList();

I am creating a report based on the following result set:

ef-distinct-multi-column

I wanted to get a unique set of weeks from the control group, but I also wanted to retrieve whether the week is an actual vs. a forecast.

Turns out it’s pretty easy:

public List<WeeklyUtilizationHistoryAndForecast> ReportSummary = businessService.GetWeeklyUtilizationHistoryAndForecast(isCache, ParamUserId, ParamStartWeek, ParamEndWeek, ParamBusinessUnitId, ParamIncludeTimeOff);

public List<WeeklyUtilizationHistoryAndForecast> Weeks = ReportSummary.OrderBy(m => m.WeekDisplay).DistinctBy(m => new { m.WeekDisplay, m.IsActual }).ToList();

Note that DistinctBy is not part of standard LINQ to Objects in this version of .NET; it comes from the MoreLINQ library, which preserves the order of the incoming sequence.