More RavenDB Resources January 3, 2012

Posted by ActiveEngine Sensei in .Net Development, C#, New Techniques, Open Source, RavenDB.
3 comments

Daniel Lang has a great post regarding how to handle relations in RavenDB.  He emphasizes that a document database is vastly different from a relational database and illustrates various scenarios of do’s and don’ts.  Go read it now.

ApprovaFlow: Create A Plugin System And Reduce Deployment Headaches June 29, 2011

Posted by ActiveEngine Sensei in .Net, ActiveEngine, Approvaflow, ASP.Net, Problem Solving, Workflow.
2 comments

This is the fourth in a series of posts for ApprovaFlow, an alternative to Windows Workflow written in C# and JSON.Net.  Source code for this post is here.

Last Time on ApprovaFlow

In the previous post we discussed how the Pipe and Filter pattern facilitated a robust mechanism for executing tasks prior to and after a transition is completed by the workflow state machine.  This accomplished our third goal, and to date we have completed:

•  Model a workflow in a clear format that is readable by both developer and business user.  One set of verbiage for all parties.  Discussed in Simple Workflows With ApprovaFlow and Stateless.

•  Allow the state of a workflow to be persisted as an integer, string, etc.  Quickly fetch the state of a workflow.  Discussed in Simple Workflows With ApprovaFlow and Stateless.

•  Create pre and post processing methods that can enforce rules or carry out actions when completing a workflow task.  Discussed in ApprovaFlow:  Using the Pipe and Filter Pattern to Build a Workflow Processor.

These goals remain:

• Introduce new functionality while isolating the impact of the new changes.  New components should not break old ones.

• Communicate to the client with a standard set of objects. In other words, your solution domain will not change how the user interface will gather data from the user.

• Use one .aspx page to process user input for any type of workflow.

• Provide ability to roll your own customizations to the front end or backend of your application.

It’s the Small Changes After You Go Live That Upset You

The goal we’ll focus on next is Introduce new functionality while isolating the impact of the new changes; new components should not break old ones.  It’s the small upsetters lurking around the corner, the ones your users will think up, that will keep you in a constant redeployment cycle.  If we implement a plug-in system, then we can prevent new features from breaking the current production system.  Implementing these changes in isolation will lead to faster testing, validation and happier users.

We lucked out, as our implementation of the Pipe and Filter pattern forced us to create objects with finite functionality.  If you recall, each step in our workflow chain was implemented as a filter derived from FilterBase, and this lends itself nicely to creating plug-ins.  The Pipe and Filter pattern forces us to have a filter for each unique action we wish to carry out.  To save data we have a SaveData filter, to validate that a user can supply a Trigger we have the ValidateUserTrigger, and so on.

“Great, Sensei, but aren’t we still constrained by the fact that we have to recompile and deploy any time we add new filters?  And, if I have to do that, why bother with the pattern in the first place?”

Well, we can easily reduce the need for re-deploying the application through the use of a plugin system where we read assemblies from a share and interrogate them by searching for a particular object type on application start up.  Each new feature will be a new filter.  This means you will be working with a small project that references ApprovaFlow to create new filters without disturbing the existing architecture.   We’ll also create a manifest of approved plug-ins so that we can control what is used and institute a little security since we wouldn’t want any plugin to be introduced surreptitiously.
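
Before we dive into the real implementation, here is the core idea in isolation as a minimal sketch: probe a share for assemblies and collect every concrete type derived from FilterBase.  The FilterRegistry described below does the real work; PluginProbe is just an illustrative name.

//  Minimal sketch of the probing idea - not ApprovaFlow's actual code.
//  FilterBase comes from ApprovaFlow; PluginProbe is a hypothetical helper.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Reflection;

public static class PluginProbe
{
    public static List<Type> FindFilters(string sharePath)
    {
        return Directory.GetFiles(sharePath, "*.dll")
                        .Select(Assembly.LoadFrom)
                        .SelectMany(assembly => assembly.GetTypes())
                        .Where(type => typeof(FilterBase).IsAssignableFrom(type)
                                       && type.IsAbstract == false)
                        .ToList();
    }
}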

Plug-in Implementation

The class FilterRegistry will perform the process of reading a share, fetching the objects derived from FilterBase, and registering these components just like we do with our system components.  There are a few additions since the last version, as we now need to read and store the manifest for later comparison with the plug-ins.  The new method ReadManifest takes care of this new task:

private void ReadManifest()
{
  string manifestSource = ConfigurationManager.AppSettings["ManifestSource"].ToString();

  Enforce.That(string.IsNullOrEmpty(manifestSource) == false,
          "FilterRegistry.ReadManifest - ManifestSource can not be null");

  var fileInfo = new FileInfo(manifestSource);

  if (fileInfo.Exists == false)
  {
    throw new ApplicationException("FilterRegistry.ReadManifest - File not found");
  }

  StreamReader sr = fileInfo.OpenText();
  string json = sr.ReadToEnd();
  sr.Close();

  this.approvedFilters = JsonConvert.DeserializeObject<List<FilterDefinition>>(json);
}

The manifest is merely a serialized list of FilterDefinitions.  This is de-serialized into a list of approved filters.
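
As a hedged illustration, a manifest entry could look like the following; FilterDefinition drives the exact property set, so treat the field names and the category value here as assumptions:

[
  {"Name" : "CaptainUnfitForCommandFilter",
   "TypeFullName" : "Plugins.CaptainUnfitForCommandFilter",
   "FilterCategory" : "PostTransition"}
]

With the approved list, the method LoadPlugIn performs the action of reading the share and matching the FullName of the object type between the manifest entries and the types in the assembly file: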

public void LoadPlugIn(string source)
{
  Enforce.That(string.IsNullOrEmpty(source) == false,
             "PlugInLoader.Load - source can not be null");

  AppDomain appDomain = AppDomain.CurrentDomain;
  var assembly = Assembly.LoadFrom(source);

  var types = assembly.GetTypes().ToList();

  types.ForEach(type =>
  {
    //  Is this type from the assembly registered in the manifest?
    var registerFilterDef = this.approvedFilters
                                .Where(app => app.TypeFullName == type.FullName)
                                .SingleOrDefault();

    if (registerFilterDef != null)
    {
      object obj = Activator.CreateInstance(type);
      var filterDef = new FilterDefinition();
      filterDef.Name = obj.ToString();
      filterDef.FilterCategory = registerFilterDef.FilterCategory;
      filterDef.FilterType = type;
      filterDef.TypeFullName = type.FullName;
      filterDef.Filter = AddCreateFilter(filterDef);

      this.systemFilters.Add(filterDef);
     }
  });
}

That’s it.  We can now control what assemblies are included in our plug-in system.  Later we’ll create a tool that will help us create the manifest so we do not have to manage it by hand.

What We Can Do with this New Functionality

Let’s turn to our sample workflow to see what possibilities we can develop.  The test CanPromoteRedShirtOffLandingParty from the class WorkflowScenarios displays the capability of our workflow.  First let’s review our workflow scenario.  We have created a workflow for the Starship Enterprise to allow members of a landing party to request to be left out of the mission.  Basically there is only one way to get out of landing party duty, and that is if Kirk says it’s okay.  Here are the workflow’s State, Trigger and Target State combinations:

State                 Trigger          Target State
RequestPromotionForm  Complete         FirstOfficerReview
FirstOfficerReview    RequestInfo      RequestPromotionForm
FirstOfficerReview    Deny             PromotionDenied
FirstOfficerReview    Approve          CaptainApproval
CaptainApproval       OfficerJustify   FirstOfficerReview
CaptainApproval       Deny             PromotionDenied
CaptainApproval       Approve          PromotedOffLandingParty

Recalling the plots from Star Trek, there were times that the medical officer could declare the commanding officer unfit for duty. Since the Enterprise was originally equipped with our workflow, we want to make just a small addition – not a modification – and give McCoy the ability to allow a red shirt to opt out of the landing party duty.

Here’s where our plugin system comes in handy.  Instead of adding more states and/or branches to our workflow, we’ll check for certain conditions when Kirk makes his decisions and execute actions.  In order to help out McCoy, the following filter is created in a separate project:

public class CaptainUnfitForCommandFilter : FilterBase
{
  protected override Step Process(Step input)
  {
    if(input.CanProcess && input.State == "CaptainApproval")
    {
      bool kirkInfected = (bool)input.Parameters["KirkInfected"];

      if(kirkInfected && input.Answer == "Deny")
      {
        input.Parameters.Add("MedicalOverride", true);
        input.Parameters.Add("StarfleetEmail", true);
        input.ErrorList.Add("Medical Override of Command");
        input.CanProcess = false;
      }
    }

    return input;
  }
}

This plug-in is simple: check that the state is CaptainApproval, and when the answer is “Deny” and Kirk has been infected, set the MedicalOverride flag and send Starfleet an email.

The class WorkflowScenarioTest.cs has the method CanAllowMcCoyToIssueUnfitForDuty() that demonstrates how the workflow will execute. We simply add the name of the plug-in to our list of post transition filters:

string postFilterNames = "MorePlugins.TransporterRepairFilter;Plugins.CaptainUnfitForCommandFilter;SaveDataFilter;";

This portion of code uses the plug-in:

//  Captain Kirk denies request, but McCoy issues unfit for command
parameters.Add("KirkInfected", true);

step.Answer = "Deny";
step.AnsweredBy = "Kirk";
step.Participants = "Kirk";
step.State = newState;

processor = new WorkflowProcessor(step, filterRegistry, workflow);
newState = processor.ConfigurePipeline(preFilterNames, postFilterNames)
  .ConfigureStateMachine()
  .ProcessAnswer()
  .GetCurrentState();

//  Medical override issued and email to Starfleet generated
bool medicalOverride = (bool)parameters["MedicalOverride"];
bool emailSent = (bool)parameters["StarfleetEmail"];

Assert.IsTrue(medicalOverride);
Assert.IsTrue(emailSent);

Now you don’t have to hesitate with paranoia each time you need to introduce a variation into your workflows.  No more small upsetters lurking around the corner.  Plus you can deliver these changes faster to your biggest fan, your customer.  Source code is here.  Run through the tests and experiment for yourself.

Simple Workflows With ApprovaFlow and Stateless April 2, 2011

Posted by ActiveEngine Sensei in .Net, ActiveEngine, Approvaflow, ASP.Net, C#, JSON.Net, New Techniques, Stateless.
add a comment

This is the second in a series of posts for ApprovaFlow, an alternative to Windows Workflow written in C# and JSON.Net. Source code for this post is here.

Last time we laid out our goals for a simple workflow engine, ApprovaFlow, with the following objectives:
• Model a workflow in a clear format that is readable by both developer and business user. One set of verbiage for all parties.
• Allow the state of a workflow to be persisted as an integer, string, etc. Quickly fetch the state of a workflow.
• Create pre and post processing methods that can enforce rules or carry out actions when completing a workflow task.
• Introduce new functionality while isolating the impact of the new changes. New components should not break old ones.
• Communicate to the client with a standard set of objects. In other words, your solution domain will not change how the user interface will gather data from the user.
• Use one .aspx page to process user input for any type of workflow.
• Provide ability to roll your own customizations to the front end or backend of your application.

The fulcrum point of all we have set out to do with ApprovaFlow is a state machine that will present a state and accept answers supplied by the users. One of Sensei’s misgivings about Windows Workflow is that it is such a behemoth when all you want to implement is a state machine.
Stateless, created by Nicholas Blumhardt, is a shining example of adhering to the rule of “necessary and sufficient”.  By using Generics, Stateless allows you to create a state machine where the State and Trigger can be represented by an integer, string, double, or enum – say, this sounds like it fulfills our goal:

• Allow the state of a workflow to be persisted as an integer, string, etc. Quickly fetch the state of a workflow.
Stateless constructs a state machine with the following syntax:

var statemachine =
       new StateMachine<TState, TTrigger>(TState currentState);

For our discussion we will create a state machine that will process a request for promotion workflow. We’ll use:

var statemachine =
       new StateMachine<string, string>(string currentState);

This could very easily take the form of

<int, int>

and will depend on your preferences.  Regardless of your choice, if the current state is represented by a primitive like an int or string, you can just fetch that from a database or a repository and now your state machine is loaded with the current state.  Contrast that with WF, where you have multiple projects and confusing nomenclature to learn.  Stateless just stays out of our way.
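
As a quick sketch of that point, hydrating the machine from persisted state takes only a couple of lines; workflowRepository and FetchCurrentState are hypothetical stand-ins for whatever data access you use:

//  Fetch the persisted primitive and load the state machine with it.
//  workflowRepository.FetchCurrentState is a hypothetical repository call.
string currentState = workflowRepository.FetchCurrentState(workflowId);

var statemachine = new StateMachine<string, string>(currentState);
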
Let’s lay out our request for promotion workflow. Here is our state machine represented in English:

Step: Request Promotion Form
  Answer => Complete
  Next Step => Manager Review

Step: Manager Review
  Answer => Deny
  Next Step => Promotion Denied
  Answer => Request Info
  Next Step => Request Promotion Form
  Answer => Approve
  Next Step => Vice President Approve

Step: Vice President Approve
  Answer => Deny
  Next Step => Promotion Denied
  Answer => Manager Justify
  Next Step => Manager Review
  Answer => Approve
  Next Step => Promoted

Step: Promotion Denied
Step: Promoted

Remember the goal Model a workflow in a clear format that is readable by both developer and business user. One set of verbiage for all parties? We are very close to achieving that goal. If we substitute “Step” with “State” and “Answer” with “Trigger”, then we have a model that matches how Stateless configures a state machine:

var statemachine = new StateMachine<string, string>(startState);

//  Request Promo form states
statemachine.Configure("RequestPromotionForm")
               .Permit("Complete", "ManagerReview");

//  Manager Review states
statemachine.Configure("ManagerReview")
               .Permit("RequestInfo", "RequestPromotionForm")
               .Permit("Deny", "PromotionDenied")
               .Permit("Approve", "VicePresidentApprove");

Clearly you will not show the code to your business partners or end users, but a simple chart like this should not make anyone’s eyes glaze over:

State: Request Promotion Form
  Trigger => Complete
  Target State => Manager Review

Before we move on you may want to study the test in the file SimpleStateless.cs. Here configuring the state machine and advancing from state to state is laid out for you:

//  Request Promo form states
statemachine.Configure("RequestPromotionForm")
                    .Permit("Complete", "ManagerReview");

//  Manager Review states
statemachine.Configure("ManagerReview")
                     .Permit("RequestInfo", "RequestPromotionForm")
                     .Permit("Deny", "PromotionDenied")
                     .Permit("Approve", "VicePresidentApprove");

//  Vice President state configuration
statemachine.Configure("VicePresidentApprove")
                      .Permit("ManagerJustify", "ManagerReview")
                      .Permit("Deny", "PromotionDenied")
                      .Permit("Approve", "Promoted");

//  Tests
Assert.AreEqual(startState, statemachine.State);

//  Move to next state
statemachine.Fire("Complete");
Assert.IsTrue(statemachine.IsInState("ManagerReview"));

statemachine.Fire("Deny");
Assert.IsTrue(statemachine.IsInState("PromotionDenied"));

The next question that comes to mind is how to represent the various States, Triggers and State configurations as data. Our mission on this project is to adhere to simplicity. One way to represent a Stateless state machine is with JSON:

{WorkflowType : "RequestPromotion",
  States : [{Name : "RequestPromotionForm", DisplayName : "Request Promotion Form"},
    {Name : "ManagerReview", DisplayName : "Manager Review"},
    {Name : "VicePresidentApprove", DisplayName : "Vice President Approve"},
    {Name : "PromotionDenied", DisplayName : "Promotion Denied"},
    {Name : "Promoted", DisplayName : "Promoted"}
    ],
  Triggers : [{Name : "Complete", DisplayName : "Complete"},
     {Name : "Approve", DisplayName : "Approve"},
     {Name : "RequestInfo", DisplayName : "Request Info"},
     {Name : "ManagerJustify", DisplayName : "Manager Justify"},
     {Name : "Deny", DisplayName : "Deny"}
  ],
StateConfigs : [{State : "RequestPromotionForm", Trigger : "Complete", TargetState : "ManagerReview"},
     {State : "ManagerReview", Trigger : "RequestInfo", TargetState : "RequestPromotionForm"},
     {State : "ManagerReview", Trigger : "Deny", TargetState : "PromotionDenied"},
     {State : "ManagerReview", Trigger : "Approve", TargetState : "VicePresidentApprove"},
     {State : "VicePresidentApprove", Trigger : "ManagerJustify", TargetState : "ManagerApprove"},
     {State : "VicePresidentApprove", Trigger : "Deny", TargetState : "PromotionDenied"},
     {State : "VicePresidentApprove", Trigger : "Approve", TargetState : "Promoted"}
  ]
}

As you can see, we are storing all States and all Triggers with their display names.  This will allow you some flexibility with UI screens and reports.  Each rule for transitioning from one state to another is stored in the StateConfigs node.  Here we are simply representing the chart that we created above as JSON.

Since we have a standard way of representing a workflow with JSON, de-serializing this definition to objects is straightforward.  Here are the corresponding classes that define a state machine:

public class WorkflowDefinition
{
        public string WorkflowType { get; set; }
        public List<State> States { get; set; }
        public List<Trigger> Triggers { get; set; }
        public List<StateConfig> StateConfigs { get; set; }

        public WorkflowDefinition() { }
}

public class State
{
        public string Name { get; set; }
        public string DisplayName { get; set; }
}

public class Trigger
{
        public string Name { get; set; }
        public string DisplayName { get; set; }

        public Trigger() { }
}

public class StateConfig
{
        public string State { get; set; }
        public string Trigger { get; set; }
        public string TargetState { get; set; }

        public StateConfig() { }
}

We’ll close out this post with an example that will de-serialize our state machine definition and allow us to respond to the triggers that we supply.  Basically it will be a rudimentary workflow.  RequestPromotion.cs will be the workflow processor.  The method Configure is where we will perform the de-serialization, and the process is quite straightforward:

  1. Deserialize the States
  2. Deserialize the Triggers
  3. Deserialize the StateConfigs that contain the transitions from state to state
  4. For every StateConfig, configure the state machine.

Here’s the code:

public void Configure()
{
    Enforce.That((string.IsNullOrEmpty(source) == false),
                            "RequestPromotion.Configure - source is null");

    string json = GetJson(source);

    var workflowDefinition = JsonConvert.DeserializeObject<WorkflowDefinition>(json);

    Enforce.That((string.IsNullOrEmpty(startState) == false),
                            "RequestPromotion.Configure - startState is null");

    this.stateMachine = new StateMachine<string, string>(startState);

    //  Get a distinct list of states with a trigger from state configuration
    //  "State => Trigger => TargetState"
    var states = workflowDefinition.StateConfigs.AsQueryable()
                                    .Select(x => x.State)
                                    .Distinct()
                                    .ToList();

    //  Assign triggers to states
    states.ForEach(state =>
    {
        var triggers = workflowDefinition.StateConfigs.AsQueryable()
                                   .Where(config => config.State == state)
                                   .Select(config => new { Trigger = config.Trigger, TargetState = config.TargetState })
                                   .ToList();

        triggers.ForEach(trig =>
        {
            this.stateMachine.Configure(state).Permit(trig.Trigger, trig.TargetState);
        });
    });
}

And we advance the workflow with this method:

public void ProgressToNextState(string trigger)
{
    Enforce.That((string.IsNullOrEmpty(trigger) == false),
                            "RequestPromotion.ProgressToNextState - trigger is null");

    this.stateMachine.Fire(trigger);
}

The class RequestPromotionTests.cs illustrates how this works.
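
If you want a feel for it before opening the solution, here is a hedged sketch of such a test; CurrentState is a hypothetical accessor for the underlying Stateless machine, so defer to the actual RequestPromotionTests.cs in the download:

[Test]
public void CanProgressThroughRequestPromotionWorkflow()
{
    //  Assumes source and startState were supplied to RequestPromotion.
    var requestPromotion = new RequestPromotion();
    requestPromotion.Configure();

    requestPromotion.ProgressToNextState("Complete");
    //  CurrentState is a hypothetical accessor for the Stateless machine.
    Assert.AreEqual("ManagerReview", requestPromotion.CurrentState);

    requestPromotion.ProgressToNextState("Approve");
    Assert.AreEqual("VicePresidentApprove", requestPromotion.CurrentState);
}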

We have seen how we can fulfill the objectives laid out for ApprovaFlow and have covered a significant part of the functionality that Stateless will provide for our workflow engine.  Here is the source code.

DataTablePager Now Has Multi-Column Sort Capability For DataTables.Net February 9, 2011

Posted by ActiveEngine Sensei in .Net, ActiveEngine, Ajax, ASP.Net, C#, DataTables.Net, jQuery.
21 comments

Some gifts just keep on giving, and many times things can just take on a momentum that grows beyond your expectation.  Bob Sherwood wrote to Sensei and pointed out that DataTables.net supports multiple column sorting.  All you do is hold down the shift key and click on any second or third column, and DataTables will add that column to the sort criteria.  “Well, how come it doesn’t work with the server side solution?”  Talk about the sound of one hand clapping.  How about that for a flub!  Sensei didn’t think of that!  Then panic set in – would this introduce new complexity to the DataTablePager solution, making it too difficult to maintain a clean implementation?  After some long thought it seemed that a solution could be neatly added.  Before reading, you should download the latest code to follow along.

How DataTables.Net Communicates Which Columns Are Involved in a Sort

If you recall, DataTables.Net uses a structure called aoData to communicate to the server what columns are needed, the page size, and whether a column is a data element or a client side custom column.  We covered that in the last DataTablePager post.  aoData also has a convention for sorting:

bSortColumn_X=ColumnPosition

In our example we are working with the following columns:

,Name,Center,,CenterId,DealAmount

where column 0 is a custom client side column, column 1 is Name (a mere data column), column 2 is Center (another data column), column 3 is a custom client side column, and the remaining columns are just data columns.

If we are sorting just by Name, then aoData will contain the following:

bSortColumn_0=1

When we wish to sort by Center, then by Name, we get the following in aoData:

bSortColumn_0=2

bSortColumn_1=1

In other words, the first column we want to sort by is in position 2 (Center) and the second column (Name) is in position 1.  We’ll want to record this somewhere so that we can pass it to our order routine.  aoData passes all column information to us on the server, but we’ll have to parse through the columns and check to see if one or many of the columns is actually involved in a sort request, and as we do we’ll need to preserve the order of each column in the sort.

SearchAndSortable Class to the Rescue

You’ll recall that we have a class called SearchAndSortable that defines how the column is used by the client.  Since we iterate over all the columns in aoData it makes sense that we should take this opportunity to see if any column is involved in a sort and store that information in SearchAndSortable as well.  The new code for the class looks like this:

public class SearchAndSortable
    {
        public string Name { get; set; }
        public int ColumnIndex { get; set; }
        public bool IsSearchable { get; set; }
        public bool IsSortable { get; set; }
        public PropertyInfo Property{ get; set; }
        public int SortOrder { get; set; }
        public bool IsCurrentlySorted { get; set; }
        public string SortDirection { get; set; }

        public SearchAndSortable(string name, int columnIndex, bool isSearchable,
                                bool isSortable)
        {
            this.Name = name;
            this.ColumnIndex = columnIndex;
            this.IsSearchable = isSearchable;
            this.IsSortable = isSortable;
        }

        public SearchAndSortable() : this(string.Empty, 0, true, true) { }
    }

There are 3 new additions:

IsCurrentlySorted - is this column included in the sort request?

SortDirection - "asc" or "desc" for ascending and descending.

SortOrder - the order of the column in the sort request.  Is it the first or second column in a multicolumn sort?

As we walk through the column definitions, we’ll look to see if each column is involved in a sort and record what direction – ascending or descending – is required. From our previous post you’ll remember that the method PrepAOData is where we parse our column definitions. Here is the new code:

//  Sort columns
this.sortKeyPrefix = aoDataList.Where(x => x.Name.StartsWith(INDIVIDUAL_SORT_KEY_PREFIX))
                                            .Select(x => x.Value)
                                            .ToList();

//  Column list
var cols = aoDataList.Where(x => x.Name == "sColumns"
                                            & string.IsNullOrEmpty(x.Value) == false)
                                     .SingleOrDefault();

if(cols == null)
{
  this.columns = new List<string>();
}
else
{
  this.columns = cols.Value
                       .Split(',')
                       .ToList();
}

//  What column is searchable and / or sortable
//  What properties from T is identified by the columns
var properties = typeof(T).GetProperties();
int i = 0;

//  Search and store all properties from T
this.columns.ForEach(col =>
{
  if (string.IsNullOrEmpty(col) == false)
  {
    var searchable = new SearchAndSortable(col, i, false, false);
    var searchItem = aoDataList.Where(x => x.Name == BSEARCHABLE + i.ToString())
                                     .ToList();
    searchable.IsSearchable = (searchItem[0].Value == "False") ? false : true;
    searchable.Property = properties.Where(x => x.Name == col)
                                                    .SingleOrDefault();

    searchAndSortables.Add(searchable);
  }

  i++;
});

//  Sort
searchAndSortables.ForEach(sortable => {
  var sort = aoDataList.Where(x => x.Name == BSORTABLE + sortable.ColumnIndex.ToString())
                       .ToList();
  sortable.IsSortable = (sort[0].Value == "False") ? false : true;
  sortable.SortOrder = -1;

  //  Is this item amongst currently sorted columns?
  int order = 0;
  this.sortKeyPrefix.ForEach(keyPrefix => {
    if (sortable.ColumnIndex == Convert.ToInt32(keyPrefix))
    {
      sortable.IsCurrentlySorted = true;

      //  Is this the primary sort column or secondary?
      sortable.SortOrder = order;

      //  Ascending or Descending?
      var ascDesc = aoDataList.Where(x => x.Name == "sSortDir_" + order)
                              .SingleOrDefault();
      if (ascDesc != null)
      {
        sortable.SortDirection = ascDesc.Value;
      }
    }

    order++;
  });
});

To sum up, we’ll traverse all of the columns listed in sColumns.  For each column we’ll grab the PropertyInfo from our underlying object of type T.  This gives only those properties that will be displayed in the grid on the client.  If the column is marked as searchable, we indicate that by setting the IsSearchable property on the SearchAndSortable class.  All of this happens in the first ForEach above as we build the list of SearchAndSortables.

Next we need to determine what we can sort, and we will traverse the new list of SearchAndSortables we created.  DataTables will tell us if the column can be sorted with the following convention:

bSortable_ColNumber = True

So if the column Center were to be “sortable” aoData would contain:

bSortable_2 = True

We record the sortable state as we read each column’s BSORTABLE entry in the code listing.

Now that we know whether we can sort on this column, we have to look through the sort request and see if the column is actually involved in a sort.  We do that by looking at what DataTables.Net sent to us from the client.  Again the convention is to send bSortColumn_0=1 to indicate that the first column for the sort is the second item listed in sColumns.  aoData will contain many bSortColumn entries, so we’ll walk through each one and record the order that column should take in the sort.  That occurs in the sortKeyPrefix loop, where we match the column index with the bSortColumn_x value.

We’ll also determine what the sort direction – ascending or descending – should be.  We read the sSortDir_x entry for the sort and record this value in the SearchAndSortable.

When the method PrepAOData is completed, we have a complete map of all columns and what columns are being sorted, as well as their respective sort direction.  All of this was sent to us from the client and we are storing this configuration for later use.

Performing the Sort

(Home stretch so play the song!!)

If you can picture what we have so far, we basically created a collection of column names, their respective PropertyInfo’s, and have recorded which of these properties are involved in a sort.  At this stage we should be able to query this collection and get back those properties and the order that the sort applies.

You may already be aware that you can have a compound sort statement in LINQ with the following statement:

var sortedCustomers = customer.OrderBy(x => x.LastName)
                                           .ThenBy(x => x.FirstName);

The trick is to run through all the properties and create that compound statement.  Remember when we recorded the position of the sort as an integer?  This makes it easy for us to sort out the messy scenarios where the second column is the first column of a sort.  SearchAndSortable.SortOrder takes care of this for us.  Just order the columns by SortOrder and you’re good to go.  So that code would look like the following:

var sorted = this.searchAndSortables.Where(x => x.IsCurrentlySorted == true)
                                     .OrderBy(x => x.SortOrder)
                                     .ToList();

sorted.ForEach(sort => {
    records = records.OrderBy(sort.Name, sort.SortDirection,
                              (sort.SortOrder == 0) ? true : false);
});

In the ForEach above we are calling our extension method OrderBy in Extensions.cs.  We pass the property name, the sort direction, and whether this is the first column of the sort.  This last piece is important, as it will create either the “OrderBy” or the “ThenBy” for us.  When it’s the first column, you guessed it, we get “OrderBy”.  Sensei found this magic on a StackOverflow post by Marc Gravell and others.
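
For reference, here is a minimal sketch of what such an extension method can look like, in the spirit of the Marc Gravell approach; the signature mirrors the calls above, but the actual Extensions.cs in the download is the authoritative version:

using System.Linq;
using System.Linq.Expressions;

public static class Extensions
{
    //  Apply OrderBy/ThenBy (or the Descending variants) by property name.
    public static IQueryable<T> OrderBy<T>(this IQueryable<T> source, string property,
                                           string sortDirection, bool isFirstColumn)
    {
        bool descending = sortDirection == "desc";
        string method = isFirstColumn
                            ? (descending ? "OrderByDescending" : "OrderBy")
                            : (descending ? "ThenByDescending" : "ThenBy");

        //  Build x => x.Property and splice it into the query expression tree.
        var parameter = Expression.Parameter(typeof(T), "x");
        var member = Expression.PropertyOrField(parameter, property);
        var keySelector = Expression.Lambda(member, parameter);

        var call = Expression.Call(typeof(Queryable), method,
                                   new[] { typeof(T), member.Type },
                                   source.Expression, Expression.Quote(keySelector));

        return (IQueryable<T>)source.Provider.CreateQuery(call);
    }
}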

Here is the entire method ApplySort from DataTablePager.cs, and note how we still check for the initial display of the data grid and default to the first column that is sortable.

private IQueryable<T> ApplySort(IQueryable<T> records)
{
  var sorted = this.searchAndSortables.Where(x => x.IsCurrentlySorted == true)
                                                .OrderBy(x => x.SortOrder)
                                                .ToList();

  //  Are we at initialization of grid with no column selected?
  if (sorted.Count == 0)
  {
    string firstSortColumn = this.sortKeyPrefix.First();
    int firstColumn = int.Parse(firstSortColumn);

    string sortDirection = "asc";
    sortDirection = this.aoDataList.Where(x => x.Name == INDIVIDUAL_SORT_DIRECTION_KEY_PREFIX + "0")
                                                    .Single()
                                                    .Value
                                                    .ToLower();

    if (string.IsNullOrEmpty(sortDirection))
    {
      sortDirection = "asc";
    }

    //  Initial display will set order to first column - column 0
    //  When column 0 is not sortable, find first column that is
    var sortable = this.searchAndSortables.Where(x => x.ColumnIndex == firstColumn)
                                                        .SingleOrDefault();
    if (sortable == null)
    {
      sortable = this.searchAndSortables.First(x => x.IsSortable);
    }

    return records.OrderBy(sortable.Name, sortDirection, true);
  }
  else
  {
      //  Traverse all columns selected for sort
      sorted.ForEach(sort => {
          records = records.OrderBy(sort.Name, sort.SortDirection,
                                    (sort.SortOrder == 0) ? true : false);
      });

    return records;
  }
}

It’s All in the Setup

Test it out. Hold down the shift key and select a second column and WHAMO – multiple column sorts! Hold down the shift key and click the same column twice and KAH-BLAMO multiple column sort with descending order on the second column!!!

The really cool thing is that our process on the server is being directed by DataTables.net on the client.  And even awesomer is that you have zero configuration on the server.  Most awesome-est is that this will work with all of your domain objects: because we have used generics, we can apply this to any class in our domain.  So what are you going to do with all that time you just got back?

Moncai – A Cloud Service for Mono and .Net December 2, 2010

Posted by ActiveEngine Sensei in .Net, ActiveEngine, Linux, Mono, New Techniques, Open Source.
add a comment

If you have read these tomes of insanity posted by yours truly, you know that Sensei likes to stretch when it comes to finding solutions.  Aspiring to be an action hero in the everyday field of software development means you have to work like a dog, hunt like a tiger and crouch like a cricket.  This also means that you have to be flexible and willing to try new things.

Moncai, a service that will deploy your .Net / Mono app to the cloud via Git or Mercurial, looks very promising for those who want to try their hand at running their .Net application in the Linux realm.  As opposed to Azure, Moncai will offer POSIX distros for you to use.  The man behind the scenes, Dale Ragan, recently talked about Moncai on a HerdingCode podcast.  What he describes is a tiered approach to levels of service.  Dale wants to offer the hobbyist or midnight blogger a chance to experiment for free / low cost, and the service levels increase depending on your needs.  Dale even takes the time to communicate with you via email when you first sign up, a real nice touch.  Go check it out and spread the word.

How Embedded Scripting Makes Dynamically Generated Test Data Possible in ASP.Net – DataBuilder Part 2 November 6, 2010

Posted by ActiveEngine Sensei in .Net Development, ActiveEngine, ASP.Net, C#, CS-Script, DataBuilder, JSON.Net, NBuilder, Problem Solving.
add a comment

Part 2 of a 3 part series.  For the latest DataBuilder capabilities, read this post or download the new source code from here.

Last episode Sensei unveiled a useful little tool called DataBuilder.  DataBuilder helps you to generate test data for your domain objects.  Just point DataBuilder to your assemblies, and with the magic of NBuilder and CS-Script you can create test data as JSON.  How is this possible?  This post will focus on the behind the scenes magic that makes DataBuilder so flexible.

The main problem that DataBuilder solves is that to create test data for your classes you normally need to fire up Visual Studio and a project, create code, compile, etc. to produce anything, and this can cause needless context switching and headache.  What if you simply wish to mock up a UI and need some data sets to work with?  DataBuilder helps in that you can create test data for any existing assembly.  You can also create different types of test data based on whatever criteria you need.  This is accomplished by taking the input supplied in the Snippet Editor screen, compiling it to an in-memory assembly and executing it.  No need to fire up Visual Studio and add a TestGeneration project to your .Net solution.

The “dynamic” nature of DataBuilder is implemented with CS-Script.  In short, CS-Script is an embedded scripting system that uses ECMA-compliant C#, with full access to the CLR and OS.  For an in-depth review see Oleg Shilo’s fantastic article on CodeProject where he describes his product.

As Oleg describes, CS-Script will compile your code into an assembly, load that assembly into a separate app domain, then execute that assembly.  There are two scenarios that can be used to host your script.  They are the Isolated Execution Pattern, where the host and script have no knowledge of each other, and the  Simplified Hosting Model for two way type sharing between the host and the script.  The Simplified Hosting Model allows the script file to access assemblies loaded in the host, as well as pass back data to the host.  DataBuilder uses the Simplified Host Model.

Before we get into the particular DataBuilder code, let’s review some samples that Oleg has provided.  The scenario presented is when you wish to remotely load a script and execute it, and the recommendation is to use interface inheritance to avoid the task of using reflection to invoke the method.

// Host contains this interface:
public interface IWordProcessor
{
    void CreateDocument();
    void CloseDocument();
    void OpenDocument(string file);
    void SaveDocument(string file);
}

//  The script file implements the interface
public class WordProcessor : IWordProcessor
{
    public void CreateDocument() { ... }
    public void CloseDocument() { ... }
    public void OpenDocument(string file) { ... }
    public void SaveDocument(string file) { ... }
}

//  Host executes the script
AsmHelper helper = new AsmHelper(CSScript.Load("script.cs", null, true));

//the only reflection based call
IWordProcessor proc = (IWordProcessor)helper.CreateObject("WordProcessor");

//no reflection, just direct calls
proc.CreateDocument();
proc.SaveDocument("MyDocument.cs");

There are other methods for invoking methods and scripts. It’s well worth your time reading through the script hosting guidelines as Oleg covers performance, reflection, interface alignment with duck typing and other facets that are important to CS-Script.

Now let’s focus on DataBuilder’s embedded scripting implementation.  DataBuilder uses the interface inheritance approach to execute the script that you supply.  Here’s the interface:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace DataGenerator.ScriptHost
{
    public interface IScriptRunner
    {
        void RunScript();
        void RunScript(Dictionary<string, object> parameters);
    }
}

And here is an implementation of the interface:

//CSScript directives - DO NOT REMOVE THE css_ref SECTION!!!
//css_ref System.Core;
//css_ref System.Data.ComponentModel;
//css_ref System.Data.DataSetExtensions;
//css_ref System.Xml.Linq;

using System;
using System.Collections.Generic;
using System.Text;
using System.IO;
using DataGenerator.Core;
using DataGenerator.ScriptHost;
using System.Linq.Expressions;
using System.Linq;
using Newtonsoft.Json;
using FizzWare.NBuilder;
//  Add a reference to your assemblies as well!!
using UnRelatedAssembly;

public class CreateTestFile : IScriptRunner
{
    public void  RunScript(Dictionary<string,object> parameters)
    {
        var agents = Builder<SalesAgent>.CreateListOfSize(5)
                    .WhereTheFirst(1)
                         .Have(x => x.FirstName = "James")
                         .And(x => x.LastName = "Kirk")
                    .AndTheNext(1)
                          .Have(x => x.FirstName = "Bruce")
                          .And(x => x.LastName = "Campbell")
                    .Build()
                    .ToList();

        parameters["JsonDataSet"] = JsonConvert.SerializeObject(agents);
    }

    public void RunScript()
    {
        throw new NotImplementedException();
    }
}

The script host is derived from ScriptHostBase.  ScriptHostBase will perform the compilation of your script with the method CompileScript(), as well as fetching any remote assemblies that you want to include.  This is a great point of flexibility as it allows you to point to any assembly that you have access to.  Assemblies can come from multiple locations, and as long as you know the namespaces you can include the classes from those assemblies in your scripts.

        /// <summary>
        /// Compile a script and store in a runner object for later
        /// execution
        /// </summary>
        protected void CompileScript()
        {
            if(string.IsNullOrEmpty(this.Script))
            {
                throw new ArgumentNullException("ScriptHostBase - CompileScript : Script can not be blank");
            }

            if (string.IsNullOrEmpty(this.TypeName))
            {
                throw new ArgumentNullException("ScriptHostBase - CompileScript : TypeName can not be blank");
            }

            //  Has an assembly already been loaded?
            AppDomain appDomain = AppDomain.CurrentDomain;

            var assemblyPaths = appDomain.GetAssemblies()
                                    .Select(x => x.FullName)
                                    .ToList();

            var fizzWare = assemblyPaths.Where(x => x.Contains("FizzWare.NBuilder"))
                                            .SingleOrDefault();

            var assemblyLoadList = this.AssemblyPaths.ToList();

            //  Load if needed
            if (fizzWare != null)
            {
                string remove = assemblyLoadList
                                     .Where(x => x.Contains("FizzWare.NBuilder"))
                                     .SingleOrDefault();
                assemblyLoadList.Remove(remove);
            }
            else
            {
                string path = ConfigurationManager.AppSettings["FizzWarePath"].ToString();
                assemblyLoadList.Add(path);
            }

            Assembly compiler = CSScript.LoadCode(this.Script, assemblyLoadList.ToArray());
            AsmHelper asmHelper = new AsmHelper(compiler);
            this.runner = asmHelper.CreateObject(this.TypeName);
        }

You may be scratching your head at the lines of code that explicitly load FizzWare.NBuilder.  When first constructing DataBuilder, Sensei struggled with getting NBuilder to compile with the new script.  CS-Script uses an algorithm to probe directories for assemblies as well as probing scripts to resolve namespaces.  In some cases, this probe will NOT locate a namespace based on the naming conventions of an assembly.  CS-Script has provisions for handling those scenarios, allowing you to specifically load an assembly.  The issue Sensei had at first was that the first execution of a script would complete successfully, as NBuilder would be loaded.  The problem lay with the second run of the script, as an exception would be thrown claiming that NBuilder was already loaded and hence there was no need to explicitly load it again!  The work around is to query the loaded assemblies and, if NBuilder is loaded, remove the path to the FizzWare.NBuilder assembly from the AssemblyPaths list and prevent the script from reloading NBuilder.

Classes derived from ScriptHostBase are responsible for implementing the ExecuteScript method.  In this implementation StringScriptHost derives from ScriptHostBase and has the following ExecuteScript method:

        /// <summary>
        /// Compile a script and invoke
        /// </summary>
        public override void ExecuteScript()
        {
            base.CompileScript();

            IScriptRunner scriptRunner = (IScriptRunner)this.runner;
            scriptRunner.RunScript(Parameters);
        }

Other script hosts can be created to handle scenarios where scripts are stored in a document database, in text fields in SQL Server, or elsewhere.

The process of including your import statements, locating any scripts stored on a share, and passing parameters to scripts is all controlled by the ScriptController.  There are two constructors, with one allowing you to specify the script location:

public ScriptController(string scriptShare){}

With the ScriptController you can execute snippets that you type free form with the method ExecuteSnippet.

public void ExecuteSnippet(string snippet, Dictionary<string, object> parameters)
{
    Enforce.ArgumentNotNull<string>(snippet, "ScriptController.ExecuteAdHoc - snippet can not be null");

    //  Wrap snippet with class declaration and additional usings
    snippet = snippetHeader + this.UsingFragment + snippetClassName +
                snippet + snippetFooter;

    var scriptHost = new StringScriptHost();
    scriptHost.Script = snippet;
    scriptHost.TypeName = "AdHoc";
    scriptHost.Parameters = parameters;
    scriptHost.AssemblyPaths = this.assemblyPaths.ToArray();

    scriptHost.ExecuteScript();
}

Another method, ExecuteScript, is used for executing script files that you have saved on a share.  As you read through the ExecuteSnippet method, you’ll note that the controller combines the required import and namespace statements.  It’s really just concatenating strings to build a complete script in the format displayed above in the CreateTestFile.cs code.
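
To make that concatenation concrete, here is a hedged sketch of the pieces; the actual snippetHeader, UsingFragment and snippetFooter values live inside ScriptController, so these literals are illustrative only:

//  Hypothetical values for illustration - the real constants are in ScriptController.
string snippetHeader = "using System;\nusing System.Collections.Generic;\n";
string usingFragment = "using Newtonsoft.Json;\nusing FizzWare.NBuilder;\n";
string snippetClassName = "public class AdHoc : IScriptRunner\n" +
                          "{\n    public void RunScript(Dictionary<string, object> parameters)\n    {\n";
string snippetFooter = "\n    }\n\n    public void RunScript() " +
                       "{ throw new NotImplementedException(); }\n}";

//  'snippet' is the NBuilder code the user typed into the Snippet Editor.
string snippet = "/* NBuilder statements typed by the user */";
string script = snippetHeader + usingFragment + snippetClassName + snippet + snippetFooter;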

You create a Dictionary<string, object> called parameters and pass this to the ScriptController.Execute methods.  This allows you great flexibility, as you can allow the scripts to instantiate different objects and return them to the host application for further use.  In the case of DataBuilder we are expecting a JsonDataSet object, which is our serialized test data in the form of JSON.
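
A hedged usage sketch ties it together; the share path and snippet text below are hypothetical:

var controller = new ScriptController(@"\\devserver\DataBuilderScripts");
var parameters = new Dictionary<string, object>();

//  snippetText holds the NBuilder statements typed into the Snippet Editor.
controller.ExecuteSnippet(snippetText, parameters);

//  The script is expected to have filled in the JsonDataSet entry.
string json = parameters["JsonDataSet"].ToString();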

That’s it.  Hopefully you find DataBuilder and these posts useful.  CS-Script is quite powerful and can allow you to execute operations without the need to constantly recompile your projects.  It also allows you to execute operations dynamically.  DataBuilder would not be possible without it.  When duty calls and fluent solutions are needed, CS-Script and embedded scripting are pretty incredible.  Source code is here.

Dynamically Create Test Data with NBuilder, JSON and .Net October 24, 2010

Posted by ActiveEngine Sensei in .Net, ActiveEngine, Ajax, ASP.Net, C#, Fluent, LINQ, Open Source, Problem Solving.
5 comments

Part 1 of a 3 part series.  For the latest DataBuilder capabilities, read this post or download the new source code from here.

Building test data should be as easy as this:

var agentList = Builder<SalesAgent>.CreateListOfSize(5)
                           .WhereTheFirst(1)
                                  .Have(x => x.FirstName = "James")
                                  .And(x => x.LastName = "Kirk")
                            .AndTheNext(1)
                                  .Have(x => x.FirstName = "Bruce")
                                  .And(x => x.LastName = "Campbell")
                            .Build()
                            .ToList();

Wouldn’t it be nice if all the properties of your objects were automatically populated:

Product:
       Id              : 1
       Title           : "Title1"
       Description     : "Description1"
       QuantityInStock : 1

NBuilder provides you with a great fluent interface to accomplish this with ease.  You can even achieve scenarios where you can create hierarchies of data, set property values on a range of objects in a list, and even create a specified range of values that you can use to populate other objects.  Peruse through the samples and you will see that NBuilder quite capably maps values to the public properties of your objects.  A real time saver.

Sensei is going to kick it up a notch and provide you with a means to create test data without having to recompile your projects.  This is ideal for when you want to create UI prototypes.  DataBuilder uses CS-Script and NBuilder to create a web based data generation tool that can read assemblies and will allow you to script a process that will generate test data in the form of JSON.

This adventure is split into two parts.  First a quick demo, then instructions on how to configure DataBuilder for your environment.  A deeper discussion of CS-Script and embedded scripting in .Net will be part of the sequel to this action/adventure, as we all know the second movie in the series is always the best!

Operating DataBuilder

In short you have three things to do:

  • Identify the assemblies that contain the objects you want to generate test data for.  The path to the files can be anywhere on your system.  For convenience there is a folder called Assembly that you can copy the files to.  Multiple assemblies from different locations can be imported.
  • Create the import statements.
  • Create the code snippet with the NBuilder statements that will generate your data.

Here’s a screen shot of DataBuilder with each section that corresponds with the three goals stated above.

And here is an example that we’ll be working with.

var agents = Builder<SalesAgent>.CreateListOfSize(5)
                    .WhereTheFirst(1)
                         .Have(x => x.FirstName = "James")
                         .And(x => x.LastName = "Kirk")
                    .AndTheNext(1)
                          .Have(x => x.FirstName = "Bruce")
                          .And(x => x.LastName = "Campbell"})
                    .Build()
                    .ToList();

parameters["JsonDataSet"] = JsonConvert.SerializeObject(agents);

Note that after the code that creates the objects, you need to include a statement like the following, substituting your own list variable for yourList:

parameters["JsonDataSet"] = JsonConvert.SerializeObject(yourList);

Without that statement you will not get your data serialized.  If you’ve entered the data as shown, hit the Build button and the resulting JSON is placed in the output box.  That’s it.  Looking through the output you’ll note that the first two sales dudes are James Kirk and Bruce Campbell, while the remaining records are completed by NBuilder.

[{"FirstName":"James","LastName":"Kirk","Salary":1.0,"RegionId":1,"RegionName":"RegionName1","StartDate":"\/Date(1287892800000-0400)\/"},{"FirstName":"Bruce","LastName":"Campbell","Salary":2.0,"RegionId":2,"RegionName":"RegionName2","StartDate":"\/Date(1287979200000-0400)\/"},{"FirstName":"FirstName3","LastName":"LastName3","Salary":3.0,"RegionId":3,"RegionName":"RegionName3","StartDate":"\/Date(1288065600000-0400)\/"},{"FirstName":"FirstName4","LastName":"LastName4","Salary":4.0,"RegionId":4,"RegionName":"RegionName4","StartDate":"\/Date(1288152000000-0400)\/"},{"FirstName":"FirstName5","LastName":"LastName5","Salary":5.0,"RegionId":5,"RegionName":"RegionName5","StartDate":"\/Date(1288238400000-0400)\/"}]

You can also load a script and execute it.  That’s done on the “Script Loader” tab.  The location of the scripts is set in the Web.config and the key name is ScriptPath.  Here’s the screen shot:

Anatomy of a DataBuilder Script

Here’s the complete C# script file that builds your data.  It’s just a class:

//CSScript directives - DO NOT REMOVE THE css_ref SECTION!!!
//css_ref System.Core;
//css_ref System.Data.ComponentModel;
//css_ref System.Data.DataSetExtensions;
//css_ref System.Xml.Linq;

using System;
using System.Collections.Generic;
using System.Text;
using System.IO;
using DataGenerator.Core;
using DataGenerator.ObjectTypes;
using DataGenerator.ScriptHost;
using System.Linq.Expressions;
using System.Linq;
using Newtonsoft.Json;
using FizzWare.NBuilder;
//  Add a reference to your assemblies as well!!
using UserDeploymentDomain;

public class CreateTestFile : IScriptRunner
{
    public void RunScript(Dictionary<string, object> parameters)
    {
        var agents = Builder<SalesAgent>.CreateListOfSize(5)
                    .WhereTheFirst(1)
                         .Have(x => x.FirstName = "James")
                         .And(x => x.LastName = "Kirk")
                    .AndTheNext(1)
                          .Have(x => x.FirstName = "Bruce")
                          .And(x => x.LastName = "Campbell")
                    .Build()
                    .ToList();

        parameters["JsonDataSet"] = JsonConvert.SerializeObject(agents);
    }

    public void RunScript()
    {
        throw new NotImplementedException();
    }
}

The very top section, “CSScript directives”, is required by CS-Script.  These are directives that instruct the CS-Script engine to include assemblies when it compiles the script.  The imports section is pretty straightforward.

You’ll note that the script inherits from an interface.  This is a convention used by CS-Script to allow the host and script to share their respective assemblies.  Sensei will discuss that in the next post.  The RunScript method accepts a Dictionary<string, object> that contains the parameters.  This will house the JsonDataSet that is expected for the screen to display the output of your data.

Advanced NBuilder Experiments
The beauty of NBuilder is that you can create test data that goes beyond “FirstName1”, and allows you to quickly create data that matches what the business users are used to seeing.  If you think about it, you should be able to generate test data that will exercise any rules that you have in the business domain, such as “Add 5% tax when shipping to New York”.  With the scripting capability of DataBuilder you can create suites of test data that can evolve as you test your system.  You could also use the JsonDataSet to create mocks of your objects as well, maybe use them for prototyping your front end.

We’ll do a quick sample.  Our scenario is to assign real regions to sales agents.  Furthermore, we want to choose only a range of regions and assign them at random.

First we build the Regions:

var regions= Builder<Region>.CreateListOfSize(4)
	.WhereTheFirst(1)
		.Have(x => x.State = "Texas")
	.AndTheNext(1)
		.Have(x => x.State = "California")
	.AndTheNext(1)
		.Have(x => x.State = "Ohio")
	.AndTheNext(1)
		.Have(x => x.State = "New York")
	.Build();

Now we’ll create the SalesAgents, and using the Pick method from NBuilder we’ll randomly assign a region to each one:

var agents = Builder<SalesAgent>.CreateListOfSize(5)
                    .WhereAll()
                           .HaveDoneToThem(x => x.RegionName = Pick.RandomItemFrom(regions).State)
                    .WhereTheFirst(1)
                         .Have(x => x.FirstName = "James")
                         .And(x => x.LastName = "Kirk")
                    .AndTheNext(1)
                          .Have(x => x.FirstName = "Bruce")
                          .And(x => x.LastName = "Campbell")
                    .Build()
                    .ToList();

The result set now has the range of states distributed to the Sales Agents. Looks like James Kirk needs to cover Texas. You may need to view the source to see the output.

[{"FirstName":"James","LastName":"Kirk","Salary":1.0,"RegionId":1,"RegionName":"Texas","StartDate":"\/Date(1287892800000-0400)\/"},{"FirstName":"Bruce","LastName":"Campbell","Salary":2.0,"RegionId":2,"RegionName":"Texas","StartDate":"\/Date(1287979200000-0400)\/"},{"FirstName":"FirstName3","LastName":"LastName3","Salary":3.0,"RegionId":3,"RegionName":"California","StartDate":"\/Date(1288065600000-0400)\/"},{"FirstName":"FirstName4","LastName":"LastName4","Salary":4.0,"RegionId":4,"RegionName":"California","StartDate":"\/Date(1288152000000-0400)\/"},{"FirstName":"FirstName5","LastName":"LastName5","Salary":5.0,"RegionId":5,"RegionName":"Ohio","StartDate":"\/Date(1288238400000-0400)\/"}]

Configure DataBuilder For Your Environment
Given that DataBuilder is loading assemblies, you will want to run it on either your dev environment or on a test server where your co-workers won’t mind if you need to take IIS up and down.  Also, you’ll want to work with a copy of your assemblies in case you need to make a quick change.  There are times when IIS will not release a file, and if you need to make changes to the assemblies themselves it’s more convenient to copy them after you’ve re-compiled.

There are two settings you need to change in the Web.config to match your environment.

ScriptPath - Point this to the share where you want to save any scripts. DataBuilder will scour the directory and list anything you place in there.

FizzWarePath - This needs to point to the location of the NBuilder dll. Most likely this will be the bin folder of the DataBuilder website. In the follow up post Sensei will explain what this does.
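
As an illustration, the two appSettings entries might look like this in the Web.config; both paths are hypothetical:

<appSettings>
  <add key="ScriptPath" value="C:\DataBuilder\Scripts" />
  <add key="FizzWarePath" value="C:\inetpub\wwwroot\DataBuilder\bin\FizzWare.NBuilder.dll" />
</appSettings>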

Wrapping Up For Now

We covered a lot on the whirlwind tour of DataBuilder.  There’s a lot more that is of interest, particularly with respect to the embedded scripting aspects provided by CS-Script.  In the next installment we’ll cover the scripting aspect in more detail.  For now, have fun building your data sets; download and experiment.  Here’s the source for DataBuilder with unit tests.

Deserializing to Persistent AnonymousTypes with JSON.Net October 9, 2010

Posted by ActiveEngine Sensei in .Net, .Net Development, ActiveEngine, C#, Problem Solving.
Tags: , , ,
1 comment so far

A few weeks back Sensei unleashed a crazy idea regarding a class AnonymousType that could persist values from an anonymous object.  The AnonymousType, created by Hugo Bonacci, models an individual object.  In a sense this is a hyper-charged Dictionary of properties that represents an object.  It’s meta data.  This is similar to a concept called the Adaptive Object Model, the theory that you create mechanisms to describe what your objects should do.  Instead of having a class for SalesAgent or Car, you have classes that represent the classes, attributes, relationships and behavior in your domain.  In other words, you create a meta data modeler and feed it the criteria that would represent SalesAgent, Car, etc.

Having a “sound-of-one-hand-clapping” moment, Sensei realized that while “Persistent AnonymousTypes” was in the title of the post, no mechanism for serializing the AnonymousType was included!!  “What the …”.  Jeeezz!  “Hell, that should be easy”, Sensei says.  Grab JSON.Net and with elbow grease make it work, right?  Anybody?

One thing that should be immediately clear is that all the meta data is locked up in the AnonymousType object, so you can’t just write:

string json = JsonConvert.SerializeObject(anonymousType);

Instead we need a way to represent all the properties of our AnonymousType and preserve each property’s name, its type, and the underlying value.  Something like:

public class NameTypeValue
{
  public string Name { get; set; }
  public Type Type{get; set;}
  public object Value { get; set; }
}

And wouldn’t it be nice if we could take a serialized stream of an actual object and convert that into an AnonymousType?  Thinking further ahead, it would be rather easy to pass around a list of NameTypeValues, as you could easily send and receive this object from a web client or other front end, building yourself a modeling or code generation tool.

Serializing the object depicted above is pretty trivial.  Using a Func<Dictionary<string, object>, string, string> we can serialize any way we wish with two tiny methods:

public string ToJSON(Func<Dictionary<string, object>, string, string> function, string jsonObjectName)
{
    return function(_Values, jsonObjectName);
}

///  Method to serialize.  You can come up with your own!!
public string SerializeWithJObject(Dictionary<string, object> values, string name)
{
    var jsonObject = new JObject();

    foreach (KeyValuePair<string, object> property in values)
    {
        jsonObject.Add(new JProperty(property.Key, property.Value));
    }

    return jsonObject.ToString();
}
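Usage is then a one-liner: hand ToJSON the method group as the Func.  A quick sketch, assuming an AnonymousType built with Create (the instance name is illustrative):

var agent = AnonymousType.Create(new { Name = "Sales Guy Rudy", Department = 45 });
string json = agent.ToJSON(agent.SerializeWithJObject, "Agent");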

If there is another mechanism for serialization that you wish to use you are free to come up with your own.  For illustration here is the JSON output of an AnonymousType for a sales agent, and followed by the JSON for an actual Agent object:

Agent JSON ==> {"Name":"Sales Guy Rudy","Department":45}

AnonymousType JSON ==> {"Name": "Sales Guy Rudy", "Department": 45}

Now that we can serialize our AnonymousType with output matching that of an actual object, we just need a way to interpret a JSON stream and build an AnonymousType.  Along the way, Sensei will talk about the second “sound-of-one-hand-clapping” moment he had when working with JSON.Net.  As you may have already surmised, you need to describe the Type of each property in order for deserialization to happen.  Sensei didn’t, and took a trip to the valley of frustration.

Ok.  We have a stream of JSON with the Name, Value and Type of each property for an object.  AnonymousType has a Set method to set a new property:

        /// <summary>
        /// Sets the value of a property on an anonymous type
        /// </summary>
        /// <remarks>Anonymous types are read-only - this saves a value to another location</remarks>
        public void Set(string property, object value) {
            this.Set<object>(property, value);
        }

        /// <summary>
        /// Sets the value of a property on an anonymous type
        /// </summary>
        /// <remarks>Anonymous types are read-only - this saves a value to another location</remarks>
        public void Set<T>(string property, T value) {

            //check for the value
            if (!this.Has(property)) {
                this._Values.Add(property, value);

            }
            else {

                //try and return the value
                try {
                    this._Values[property] = value;
                }
                catch (Exception ex) {
                    throw new Exception(
                        string.Format(
                            AnonymousType.EXCEPTION_COULD_NOT_ACCESS_PROPERTY,
                            property,
                            (value == null ? "null" : value.GetType().Name),
                            ex.Message
                            ),
                            ex);
                }
            }

        }

It’s pretty straightforward to accept a NameTypeValue object and perform:

public void AddProperty(string objectName, NameTypeValue nameTypeValue)
{
    //  Object doesn't exist?  Add.
    if (objects.ContainsKey(objectName) == false)
    {
        objects.Add(objectName, new List<NameTypeValue>());
    }

    var properties = objects[objectName];

    //  All properties are unique
    var existingProperty = properties.Where(x => x.Name == nameTypeValue.Name)
                                     .SingleOrDefault();

    if (existingProperty == null)
    {
        properties.Add(nameTypeValue);
    }
}

and taking this a step further, a List<NameTypeValue> can supply all properties for an object:

properties.ForEach(x => { anonymousType.Set(x.Name, x.Value); });

Accepting a JSON stream of a List<NameTypeValue> should be easy-cheesey mac-n-peasey.  The first version of this looked like the following:

public AnonymousType DeserializeFromJSONProperties(string objectName, string json)
{
    Enforce.ArgumentNotNull(objectName, "AnonFactory.Deserialize - objectName can not be null");
    Enforce.ArgumentNotNull(json, "AnonFactory.Deserialize - json can not be null");

    List<NameTypeValue> propertyList = JsonConvert.DeserializeObject<List<NameTypeValue>>(json);

    //  Add properties.  Make sure int is not deserialized to a long since JSON.Net
    //  makes best guess
    propertyList.ForEach(x => AddProperty(objectName, x));

    return CreateAnonymousType(objectName);
}

But one-moooorrree-thing!  Sensei discovered that JSON.Net, when presented with an integer like 5, will deserialize to the largest possible type when not presented with a target.  In other words, when you have this JSON:

{"Department" : 45}

and deserialize to an object, it must accommodate the largest possible type in order to avoid truncating the data.  That means an int is deserialized as an Int64!!  The first round of testing was quite aggravating, as AnonymousType would accept the property into its schema, but when you went to fetch that value later on you would get an exception.  In other words, when you did this:

//  Found in JSONTests.MakeItFail()
var anonFactory = new AnonFactory();
var darrellDept = new NameTypeValue();
darrellDept.Name = "Department";
darrellDept.Value = 45;

var darrellName = new NameTypeValue();
darrellName.Name = "Name";
darrellName.Value = "Darrell";

var propertyList = new List<NameTypeValue>();
propertyList.Add(darrellDept);
propertyList.Add(darrellName);

//  Create JSON stream of properties
string darrellPropertyJSON = JsonConvert.SerializeObject(propertyList);

//  Try to deserialize and create an AnonymousType object
var otherDarrell = anonFactory.DeserializeFromJSONProperties("Agent", darrellPropertyJSON);
Assert.AreEqual(otherDarrell.Get<int>("Department"), 45);

you got an InvalidCastException.
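You can reproduce the underlying JSON.Net behavior in isolation, with AnonymousType completely out of the picture.  A minimal sketch:

//  With no target type to guide it, JSON.Net picks the widest integer type.
var parsed = JsonConvert.DeserializeObject<Dictionary<string, object>>("{\"Department\":45}");
Console.WriteLine(parsed["Department"].GetType().Name);   //  "Int64", not "Int32"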

Luckily you have the Type so you can perform a conversion as you deserialize the property and add it to AnonymousType’s Dictionary<string, object>.  Here’s the new version:

propertyList.ForEach(x => AddProperty(objectName, ConvertTypeFromDefinition(x)));

private NameTypeValue ConvertTypeFromDefinition(NameTypeValue nameTypeValue)
{
  if (nameTypeValue.Type != nameTypeValue.Value.GetType())
  {
    nameTypeValue.Value = Convert.ChangeType(nameTypeValue.Value, nameTypeValue.Type);
  }

  return nameTypeValue;
}
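In isolation, the conversion at the heart of ConvertTypeFromDefinition works like so:

object boxed = 45L;                                         //  the Int64 JSON.Net hands back
object converted = Convert.ChangeType(boxed, typeof(int));  //  re-box as an Int32
int department = (int)converted;                            //  the cast now succeeds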

When you look at the new version of the AnonymousType project you’ll note that serializing is handled by the AnonymousType itself, while a factory class is used both for building an AnonymousType from NameTypeValues and for deserializing JSON.  Sensei struggled a bit with this: on the one hand, if AnonymousType is responsible for serializing itself, should it also be able to deserialize a stream?  On the other hand, a factory seemed logical since you could have a registry of AnonymousType objects, thereby centralizing the creation and management of AnonymousTypes.  Don’t like it – create your own and share!  Regardless, looks like we can fit through the mini-Stargate now.  Here’s version 2.

Janga – A Validation Framework with a Fluent API September 26, 2010

Posted by ActiveEngine Sensei in .Net, ActiveEngine, Business Processes, C#, Design Patterns, Expression Trees, Fluent, LINQ, New Techniques, Problem Solving.
Tags: , , , , , ,
add a comment

Why can’t we write code that reads like this:

bool passed = employee.Enforce("Employee", false)
                    .When("Age", Compare.GreaterThan, 45)
                    .When("Department", Compare.In, deptList)
                    .IsValid;
if(passed)
{
    SomeProcess();
}

One of the enduring challenges for software developers and business is to create abstractions that accurately represent concrete rules for business operations.  As opposed to operating like our tribal ancestors where you had to kill a goat, start a fire and listen to the blind boy tell the tale told for thousands of years, today we’d like to be able to read stories ourselves.  Hopefully the story that we read matches the reality of what we have implemented in our code.  Many nested if statements can quickly make verifying that the code matches the story very difficult.

A fluent validation API can assist with this.  Look at the code at the top of the post.  You can show that to most people without having to get out the smelling salts.  For your fellow developers it creates a succinct way to express precisely what the logic is.  They’ll love you for it.

Janga is a fluent validation framework for creating such an API.  There are three goals to be met here, and Janga fulfills them all:

Goal 1 – Be able to chain “When” clauses together.  Each test – represented by a “When” clause – needs to be chainable with the next.

Goal 2 – Accept a test on any object property where the test criteria, in the form of x <= y, is defined at runtime.  The types of objects and their properties will not be known until runtime, so our framework must be able to analyze an object and construct a test against each property as it is presented.  This is NOT the specification pattern, where you define delegates ahead of time.

Goal 3 –  Flexibly handle errors by either halting on the first error, or by proceeding with each test and logging each error as it is encountered.

The code Sensei will present here fulfills all of these goals and gives us the fluent magic we see in the sample at the top of this post.  Before we delve into the details, the sources for the ideas and explanations of Lambda Expressions, fluent APIs and Expression trees should be acknowledged and applauded, because they got Sensei thinking along the right path:

Fluent Validation API

Roger Alsing – Fluent Argument Validation Specification

Raffaele Garofalo – How to write fluent interface with C# and Lambda.

Lambdas, Expression Trees, Delegates, Predicates

Expression Tree Basics – Charlie Calvert’s Community Blog

Marc Gravell – Code, code and more code.: Explaining Expression

Marc Gravell – Code, code and more code.: Express yourself

Implementing Dynamic Searching Using LINQ (check the section regarding dynamic expressions.)

Creating this API is a twisted cluster-wack of a zen puzzle.  The code for this solution consists of one class and three extension methods.  We’ll make use of generics, delegates and expression trees to evaluate our When clauses.  In the end we’ll see that with very little code we get a lot of mileage.  It took Sensei a long time to wrap his head around how to piece all of these things together, so hopefully the explanation will be clear.  Note that the solution has tests that demonstrate how to use the framework, so if you want to skip the madness and just try things out, go for it.

Goal 1:  Chaining When clauses together

To get the ball rolling, there is an extension method Enforce that accepts the object you wish to evaluate and encapsulates it in a Validation class.

public static Validation<T> Enforce<T>(this T item, string argName,
    bool proceedOnFailure)
{
    return new Validation<T>(item, argName, proceedOnFailure);
}

Creating a chain of tests is accomplished with the Validation class and successive calls to the extension method When.  Validation encapsulates the object you wish to test.  In our examples that’s Employee.  Employee will be passed on to When, When executes a test and stores the results in Validation.  After the test, When returns Validation, and this creates the opportunity to execute another extension method.

public class Validation<T>
{
    public T Value { get; set; }
    public string ArgName { get; set; }
    public bool ProceedOnFailure { get; set; }
    public bool IsValid { get; set; }
    public IList<string> ErrorMessages { get; set; }

    public Validation(T value, string argName)
    {
        this.ArgName = argName;
        this.Value = value;
        this.ProceedOnFailure = false;

        //  Set to valid in order to allow for different chaining of validations.
        //  Each validator will set value relative to failure or success.
        this.IsValid = true;
        this.ErrorMessages = new List<string>();
    }

    public Validation(T value, string argName, bool proceedOnFailure)
    {
        this.ArgName = argName;
        this.Value = value;
        this.ProceedOnFailure = proceedOnFailure;

        //  Set to valid in order to allow for different chaining of validations.
        //  Each validator will set value relative to failure or success.
        this.IsValid = true;
        this.ErrorMessages = new List<string>();
    }
}

Signature of When (note that we return Validation):

public static Validation<T> When<T>(this Validation<T> item, string propertyName, Compare compareTo, object propertyValue)

Before we continue on with reviewing dynamic evaluation by the When clause, you could stop here and still have a useful mechanism for creating validation routines.  That is, you could create an extension method for each validation you want to perform.  One example could be:

public static Validation<Employee> LastNameContains(
        this Validation<Employee> employee, string compareValue)
{
    var result = employee.Value.LastName.Enforce("LastName",
                  employee.ProceedOnFailure).Contains(compareValue);

    employee.IsValid = result.IsValid;

    result.ErrorMessages
            .ToList()
            .ForEach(x => employee.ErrorMessages.Add("LastName => " + x));

    return employee;
}

So instead of Enforce().When you will use Enforce().LastNameContains(“Smi”).  You will also have to create a new method for each condition.  This is still quite expressive and would go a long way toward keeping things organized.  This would be more in the spirit of the specification pattern.

Goal 2:  Dynamically Evaluating Tests at Runtime

As stated, the “tests” are performed with extension method When.  When accepts the Validation object, along with propertyName and the propertyValue that you are testing.  The enum Compare determines the type of test to perform.  The comparisons are:

public enum Compare
{
    Equal = ExpressionType.Equal,
    NotEqual = ExpressionType.NotEqual,
    LessThan = ExpressionType.LessThan,
    GreaterThan = ExpressionType.GreaterThan,
    LessThanOrEqual = ExpressionType.LessThanOrEqual,
    GreaterThanOrEqual = ExpressionType.GreaterThanOrEqual,
    Contains = ExpressionType.TypeIs + 1,
    In = ExpressionType.TypeIs + 2
}

The magic of When stems from the use of Expression trees compiled as delegates.  As defined on MSDN:

Expression trees represent code in a tree-like data structure, where each node is an expression, for example, a method call or a binary operation such as x < y.

You can compile and run code represented by expression trees. This enables dynamic modification of executable code, the execution of LINQ queries in various databases, and the creation of dynamic queries.

This gives you the ability, at runtime, to dynamically evaluate an expression in the form of x == y, also referred to as a binary expression.  In our case, we wish to evaluate:  Employee.Age == 45.  The delegate takes care of presenting the type of the Expression and its components to the runtime engine.

Marc Gravell explains the difference between a delegate and an Expression as:

  • The delegate version (Func<int,int,bool>) is the belligerent manager; “I need you to give me a way to get from 2 integers to a bool; I don’t care how – when I’m ready, I’ll ask you – and you can tell me the answer”.
  • The expression version (Expr<Func<int,int,bool>>) is the dutiful analyst; “I need you to explain to me - if I gave you 2 integers, how would you go about giving me a bool?”
  • In standard programming, the managerial approach is optimal; the caller already knows how to do the job (i.e. has IL for the purpose). But the analytic approach is more flexible; the analyst reserves the right to simply follow the instructions “as is” (i.e. call Compile().Invoke(…)) – but with understanding comes power. Power to inspect the method followed; report on it; substitute portions; replace it completely with something demonstrably equivalent, etc…

.NET 3.5 allows us to create “evaluators” with Lambda Expressions compiled as delegates that will analyze an object type, the comparisons we can make, and the values we want to compare dynamically.  It will then execute that tiny block of code.  This is treating our code as a set of objects – picture the expression as a tree.

Each node on the tree is an Expression. Think of this as a “bucket” to hold a value, a property or an operation.  For the runtime engine to know what the type and parameters of the Expressions are, we create a delegate from the Lambda expression of that node.  In other words, we let the compiler know that we have an expression of type Employee and will evaluate whether Employee.Age is equal to 45.

To accomplish the magic at runtime, you need to set up “buckets” to hold Employee.Age or Employee.FirstName and their values, with their respective types, for evaluation.  Furthermore, we want to be able to evaluate any type of binary expression, so our Expression will make use of generics and a tiny bit of reflection so that we have code that “parses” the object and its properties dynamically.

The Extension Method When:

public static Validation<T> When<T>(this Validation<T> item, string propertyName, Compare compareTo, object propertyValue)

Creating the delegate of the Lambda expression:

//  Determine type of parameter.  i.e. Employee
ParameterExpression parameter = Expression.Parameter(typeof(T), "x");

//  Property on the object  to compare to.  i.e. Employee.Age
Expression property = Expression.Property(parameter, propertyName);

//  The propertyValue to match.  i.e 45
Expression constant = Expression.Constant(propertyValue, propertyValue.GetType());

This takes care of the X and Y of the binary expression, but the next task is to create the comparison as an Expression as well:

Expression equality = CreateComparisonExpression<T>(property, compareTo, constant);

The type of comparison is determined by the enum Compare.  Once these steps are completed we convert the expression into a delegate with the statement:


var executeDelegate = predicate.Compile();
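The predicate variable isn’t shown in the snippets above; a plausible reconstruction from the pieces we do have – parameter and equality – looks like this:

//  Assumed glue code, inferred from the surrounding snippets:  wrap the
//  comparison and parameter in a lambda, compile, then invoke against the
//  object held by the Validation<T>.
Expression<Func<T, bool>> predicate =
    Expression.Lambda<Func<T, bool>>(equality, parameter);

var executeDelegate = predicate.Compile();
bool passed = executeDelegate(item.Value);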

If you are worried about performance and the use of reflection, note that making the compiled delegates static greatly minimizes the impact: you’ll take the performance hit on the first run but not on subsequent runs.
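One way to read the “static” remark is a static cache of compiled delegates, so that Compile() runs once per unique test.  The sketch below is an assumption, not code from the Janga source:

//  Hypothetical cache keyed by type/property/comparison (not from the post).
private static readonly Dictionary<string, Delegate> _delegateCache =
    new Dictionary<string, Delegate>();

private static Func<T, bool> GetOrCompile<T>(string key, Expression<Func<T, bool>> predicate)
{
    lock (_delegateCache)
    {
        Delegate cached;
        if (!_delegateCache.TryGetValue(key, out cached))
        {
            cached = predicate.Compile();
            _delegateCache[key] = cached;
        }
        return (Func<T, bool>)cached;
    }
}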

Goal 3:  Error Reporting

For error reporting, Validation requires the name of the object via the property ArgName, and asks that you specify whether you wish to halt when there is an error.  This is accomplished with ProceedOnFailure.  An error log is created when you wish all tests to complete despite their respective results.  When you want to halt on the first error and throw an exception, set ProceedOnFailure to false.

Reporting the errors themselves takes place in each When clause, and this is implemented at the end of the When extension method.

//  Report Error handling
if(item.IsValid == false)
{
    if(item.ProceedOnFailure)
    {
        item.ErrorMessages.Add("When " + item.ArgName + "."
            + propertyName + " " + compareTo.ToString()
            + " " + propertyValue + " failed.");
    }
    else
    {
        throw new ArgumentException("When " + item.ArgName + "."
            + propertyName + " " + compareTo.ToString()
            + " " + propertyValue + " failed.");
    }
}

Finally we need to return the Validation object so that we can chain another When operation.
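Putting the pieces together, consuming the error log might look like the following sketch; employee and deptList are placeholders, and the members are those of the Validation<T> shown earlier:

var validation = employee.Enforce("Employee", true)     //  true:  log errors, don't throw
    .When("Age", Compare.GreaterThan, 45)
    .When("Department", Compare.In, deptList);

if (validation.IsValid == false)
{
    foreach (string message in validation.ErrorMessages)
        Console.WriteLine(message);
}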

To recap, When is a dynamic filter: at runtime, code is evaluated and created on the fly to analyze and execute a tree representing code as an object.  The expression trees can be applied to any object and evaluate the object’s properties.  Holy snikes!!!  If that doesn’t scare you, how ‘bout chaining When’s together by always returning a Validation object so that you can keep applying extension methods to it?  Twisted Zen mind torture indeed, since we have complicated-looking code so that we can write less complicated “business code”.

Here is the source code with unit tests.

Persistent AnonymousTypes and Querying with Linq September 19, 2010

Posted by ActiveEngine Sensei in .Net, ActiveEngine, C#, Expression Trees, LINQ.
Tags: , , ,
1 comment so far

For those of us who are C# developers we are familiar with an anonymous type, defined as:

Anonymous types provide a convenient way to encapsulate a set of read-only properties into a single object without having to first explicitly define a type. The type name is generated by the compiler and is not available at the source code level. The type of the properties is inferred by the compiler.

A neat feature, but once the anonymous object falls out of scope you can no longer use it.  Hugo Bonacci, creator of CSMongo and jLinq, has created a class called AnonymousType that will allow you to persist an anonymous type beyond the scope of the method that creates that object.  In other words he has a process that will allow you to define an anonymous object on the fly, such as:


AnonymousType.Create(new
{
    Name = name,
    Department = dept
});

This is a bit of a paradigm shift, since you can no longer access the object’s properties with a getter; instead, you use a statement like:

string name = anonymousType.Get<string>("Name");

This is great, but how would you query a List of these objects?  Certainly with AnonymousType as an object you may have a collection of them, especially since this construct lends itself to being an all-purpose DTO, or even functioning like a SharePoint list.

Querying a collection of AnonymousTypes is possible through the use of Expression Trees.  Using this technique you can query a list of AnonymousTypes with the following syntax:

var dept13 = agentList.AsQueryable()
                .Where(x => x.Has<int>("Department", Compare.Equal, 13));

Sensei’s solution consists of a Has() method on the AnonymousType class, and a static class AnonPredicate with Evaluate and CreateComparisonExpression methods.  Here’s the source project with unit tests.

AnonymousType.Has

public bool Has<T>(string propertyName, Compare comparesTo, object objectValue)
{
    var value = this.Get<T>(propertyName);

    return AnonPredicate.Evaluate<T>(value, comparesTo, objectValue);
}

Has() is the starting point.  The first step is to get the current value of the property you wish to compare with the test value; since the properties of the AnonymousType are stored in a Dictionary you have to fetch that first.  The generic type “T” allows you to use the Has method for all types.  “T” also tells the AnonymousType class the type that the property should be cast to.  This lets the class determine what to do at runtime, and eliminates the need for you to write a method per type, such as HasInt(), HasDouble(), HasString(), etc.

Has<T>() then passes the value, the type of comparison and the expected object to an evaluation method.

AnonPredicate.Evaluate


public static bool Evaluate<T>(T propertyValue, Compare comparesTo, object objectValue)
{
    ParameterExpression parameter = Expression.Parameter(typeof(T), "x");

    Expression leftConstant = Expression.Constant(propertyValue, typeof(T));
    Expression rightConstant = Expression.Constant(objectValue, objectValue.GetType());

    var comparison = CreateComparisonExpression<T>(leftConstant, comparesTo, rightConstant);
    Expression<Func<T, bool>> predicate =
        Expression.Lambda<Func<T, bool>>(comparison, parameter);

    var execDelegate = predicate.Compile();
    return execDelegate(propertyValue);
}

Evaluate will build an expression tree of the form “x == y”, where x is the value of the AnonymousType property as Constant, == is the comparison operation, and y is the compared value as Constant.  There is no dynamic look up of the AnonymousType property as we did that previously in the calling method Has().  Here we are just building a simple expression, and will determine if invoking that expression will yield a true or a false.
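A couple of quick sanity checks show Evaluate working in isolation:

bool over40 = AnonPredicate.Evaluate<int>(45, Compare.GreaterThan, 40);          //  true
bool hasSt  = AnonPredicate.Evaluate<string>("Steve", Compare.Contains, "St");   //  true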

The parameter comparesTo is an Enumeration representing the equality expression to use:

public enum Compare
{
    Equal = ExpressionType.Equal,
    NotEqual = ExpressionType.NotEqual,
    LessThan = ExpressionType.LessThan,
    GreaterThan = ExpressionType.GreaterThan,
    LessThanOrEqual = ExpressionType.LessThanOrEqual,
    GreaterThanOrEqual = ExpressionType.GreaterThanOrEqual,
    Contains = ExpressionType.TypeIs + 1,
    In = ExpressionType.TypeIs + 2
}

Evaluate calls CreateComparisonExpression to build an equality expression.

CreateComparisonExpression

public static Expression CreateComparisonExpression<T>(Expression left, Compare comparesTo, Expression right)
{
    switch (comparesTo)
    {
        case Compare.Equal:
            return Expression.Equal(left, right);

        case Compare.GreaterThan:
            return Expression.GreaterThan(left, right);

        case Compare.GreaterThanOrEqual:
            return Expression.GreaterThanOrEqual(left, right);

        case Compare.LessThan:
            return Expression.LessThan(left, right);

        case Compare.LessThanOrEqual:
            return Expression.LessThanOrEqual(left, right);

        case Compare.NotEqual:
            return Expression.NotEqual(left, right);

        case Compare.Contains:
            MethodInfo contains = typeof(string).GetMethod("Contains", new[] { typeof(string) });
            return Expression.Call(left, contains, right);

        case Compare.In:
            //  You are accepting a List<T>, where T corresponds to the property on your class.
            //  I.E.  List<int> => Employee.Age as Integer.
            //  Comparison is Left Compares Right, hence you need the type of Left.
            return Expression.Call(typeof(Enumerable), "Contains", new Type[] { left.Type }, right, left);

        default:
            throw new ArgumentException("Query.CreateComparisonExpression - comparison not supported");
    }
}
Here a simple switch is used to build and return our equality expression.  This is really a binary expression with a left and right constant coupled with the type of equality.

Take note of the Contains and In cases.  These use a bit of reflection to call .Net methods to perform the comparison.  Contains will work for strings, while In works like the SQL IN clause and accepts an array of objects to test.  This eliminates the need for you to write multiple Or clauses and keeps your code more readable.

Back To Evaluate()

So far, we’ve accepted an enum that tells us what operation to perform and have built an expression tree that will tell us whether “x >= y” is true.  We build the Lambda expression with the line:

Expression<Func<T, bool>> predicate =
    Expression.Lambda<Func<T, bool>>(comparison, parameter);

The last two lines of Evaluate() execute the expression tree for us.  Before we execute we must compile the tree:

var execDelegate = predicate.Compile();

This makes the code executable. Next we run the code:

return execDelegate(propertyValue);

The code will yield a true or false, and we return this result.

Well, so what?

Remember our goal?  We wanted to create a process that would examine the innards of an AnonymousType and return true when a property meets a condition.  This lets us do the following:

//  Find all agents in Department 13 or 21
int[] inList = new int[2]{13, 21};
var dept13_21 = agentList.AsQueryable()
                 .Where(x=> x.Has<int>("Department", Compare.In, inList));

Enough Wax-on / Wax-off.  Show me Crane Style!

You’ve done well, Daniel-san.  Now apply your knowledge.  Even with all the syntax-sugar, what if you need to find all AnonymousTypes that have an Age property whose value is between 40 and 60?  Yes, you could write it this way:


var agentsAgeBetween40_60 = anonAgents.AsEnumerable()
    .Where(x => (x.Has<int>("Age", Compare.GreaterThanOrEqual, 40) & x.Has<int>("Age", Compare.LessThanOrEqual, 60)))
    .ToList();

But look at that mess!  If you don’t know karate you’ll have to learn it quick because your co-workers are going to hunt you down when they have to support this code.

Sensei’s shumatsu dosa ichi (“after class exercise number one”) is the Between method:


public bool Between<T>(string propertyName, T bottomRange, T topRange)
{
    var value = this.Get<T>(propertyName);

    return ((AnonPredicate.Evaluate<T>(value, Compare.GreaterThanOrEqual, bottomRange))
        & (AnonPredicate.Evaluate<T>(value, Compare.LessThanOrEqual, topRange)));
}

Now we write our query as:

var salesAgentsBetween40_60 = agentList.AsQueryable()
    .Where(x => x.Between("Age", 40, 60));

The Fung-Shui Kid Part II

“Oh yeah, old man,” you say.  “How about karating this compound statement:  Show me all sales agents between the ages of 40 and 60 whose first names begin with ‘St’!”

Before Sensei left the temple he gobbled down a dose of PredicateBuilder by Joe and Ben Albahari that allows him to create dynamic compound lambda expressions!  Shumatsu dosa ni (“after class exercise number two”) proceeds thusly:

public static Expression<Func<T, bool>> True<T>() { return f => true; }
public static Expression<Func<T, bool>> False<T>() { return f => false; }

public static Expression<Func<T, bool>> Or<T>(this Expression<Func<T, bool>> expr1,
                                                    Expression<Func<T, bool>> expr2)
{
    var invokedExpr = Expression.Invoke(expr2, expr1.Parameters.Cast<Expression>());
    return Expression.Lambda<Func<T, bool>>
          (Expression.OrElse(expr1.Body, invokedExpr), expr1.Parameters);
}

public static Expression<Func<T, bool>> And<T>(this Expression<Func<T, bool>> expr1,
                                                     Expression<Func<T, bool>> expr2)
{
    var invokedExpr = Expression.Invoke(expr2, expr1.Parameters.Cast<Expression>());
    return Expression.Lambda<Func<T, bool>>
          (Expression.AndAlso(expr1.Body, invokedExpr), expr1.Parameters);
}

So like Neo, now we don’t even have to dodge the bullets, as we’re writing this:


var predicate = PredicateBuilder.False<AnonymousType>();
predicate = predicate.Or(x => x.Between<int>("Age", 40, 60));
predicate = predicate.Or(x => x.Has<string>("Name", Compare.Contains, "St"));

var salesAgentsBetween40_60_STInName = agentList.AsQueryable()
    .Where(predicate);

The Return of the Sensei

Recapping all the action: we can create AnonymousTypes and access their properties in other calling methods.  In the middle of the fracas – or on the fly, for you .Net types – we can create a class that serves as an anonymous type and can be passed to other methods.  As this lives as a regular object, we can create collections or lists of these objects and filter them using LINQ, Expression Trees, and the awesome PredicateBuilder.  Here are the source files.

Stay tuned – next time we leave the temple and hunt for beer!!  If you want to read more, gaze through these ancient scrolls from the masters:

Expression Tree Basics – Charlie Calvert’s Community Blog

Marc Gravell – Code, code and more code.: Explaining Expression

Marc Gravell – Code, code and more code.: Express yourself

Implementing Dynamic Searching Using LINQ (check the section regarding dynamic expressions.)
