C# Advanced Tutorial – Advanced programming with C# [Advanced]

This tutorial aims to give a brief and advanced introduction into programming with C#. The prerequisites for understanding this tutorial are a working knowledge of programming, the C programming language and a little bit of basic mathematics. Some basic knowledge of C++ or Java could be helpful, but should not be required.

 

Events, asynchronous and dynamic programming, the TPL and reflection

This is the third part of a series of tutorials on C#. In this part we are going to discuss exciting features of C# like dynamic programming with the DLR and accessing metadata via reflection. We will also extend our knowledge of the .NET-Framework by getting to know the abstract Stream class, and of the Windows Forms UI framework by diving into the event pattern. Finally, we will learn how to keep our applications responsive by using asynchronous operations as well as multiple threads and tasks. Using the Task Parallel Library we will see how to get optimal performance out of multi-core processors.
For further reading, a list of references is given at the end. The references provide a deeper look at some of the topics discussed in this tutorial.

 

Events

In the previous tutorial we already started with Windows Forms development. A crucial concept in UI development is the running message loop. This loop connects our application to the operating system. The key question is how we can respond to certain messages in this loop. Of course the answer to that question is the concept of events.
We’ve already seen that we can store pointers to arbitrary functions in so-called delegates. A delegate type is defined by a name, a return type and a list of parameters, i.e. their types and names. This concept makes referencing methods easy and reliable. The concept of an event is quite closely related. Let’s start with an example that does not use events, but goes in the direction of a message loop communicating with external code:
static void Main()
{
    Application.callback = () => Console.WriteLine("Number hit");
    Application.Run();
}

static class Application
{
    public static Action callback;

    public static void Run()
    {
        Random r = new Random(14);

        while(true)
        {
            double p = r.NextDouble();

            if(p < 0.0001 && callback != null)
                callback();
            else if(p > 0.9999)
                break;
        }
    }
}
 
What is the code doing? Nothing too special: we created a static class Application whose Run method contains a permanent loop. Inside the loop we have two special cases. In one case we want to finish the application (similar to when the user closes the program) and in the other we want to invoke an arbitrary piece of code.
In this sample code we choose a seed for the random number generator of 14. This is quite arbitrary. We only do this to get a reproducible result, that invokes the callback method more than once. The key question now is: How is this related to events?
An event is in fact a callback. However, there are a few (compiler-oriented) differences. The first difference is a language extension. Additionally to just using a delegate, we also need to use the keyword event. Once a delegate variable is marked as being an event, we cannot set it directly from outside the defining class. Instead we can only add or remove additional event handlers.
We can draw a scheme representing this relation:
The relation between event raiser and event handler.
Let’s modify our code in two parts:
static void Main()
{
    Application.callback += () => Console.WriteLine("Number hit");
    Application.Run();
}

static class Application
{
    public static event Action callback;

    public static void Run()
    {
        Random r = new Random(14);

        while(true)
        {
            double p = r.NextDouble();

            if(p < 0.0001 && callback != null)
                callback();
            else if(p > 0.9999)
                break;
        }
    }
}
 
Now we see that we need to use the self-add operator (+=) for adding an event handler. Removing an event handler is possible by using the self-subtract operator (-=). This only works if an event handler for the given method has already been added. Otherwise nothing can be removed, of course (this will not result in an exception, but it could result in unexpected behavior, e.g. if one thinks the actual handler was removed, while actually removing something different that just matches the required signature).
Obviously we could use more handlers for the same event. So the following is also possible in our Main method:
Application.callback += () => Console.WriteLine("Number hit");
Application.callback += () => Console.WriteLine("Aha! Another callback");
Application.Run();
 
Now the two methods would be invoked on calling the delegate instance inside our class Application. How is that possible? The magic lies in two things.
  1. The compiler creates methods that will be called on using += and -= in combination with our defined event. The corresponding method will be called once we use the variable with one of those operators.
  2. The compiler uses the Combine method of the Delegate class to combine multiple delegates into one when += is used. Additionally, adding or removing handlers is thread-safe: the compiler does not insert lock statements, but uses the Interlocked.CompareExchange instruction.
The outcome is quite nice for us. Using the keyword event, we can not only mark delegates as something special (as events, to be precise), but the compiler also constructs additional helpers that come in quite handy.
We will see later on that while adding or removing event handlers is thread-safe, firing them is not. However, for the moment we are happy with the current state, being able to create our own events and wiring up event handlers to have callbacks once an event is being fired.
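Since firing the event remains our own responsibility, a common defensive pattern (sketched here in anticipation of the thread-safety discussion; the RaiseCallback helper is an invented name) is to copy the delegate into a local variable before invoking it:

static class Application
{
    public static event Action callback;

    static void RaiseCallback()
    {
        //Copy the (multicast) delegate reference first; if another thread
        //removes the last handler afterwards, our local copy stays valid
        Action handler = callback;

        if (handler != null)
            handler();
    }
}

The check-then-invoke on the field itself (as in our Run method) is fine for single-threaded code, but the local copy removes the race between the null check and the call.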

 

The .NET standard event pattern

In theory an event could expect its handlers to return a value. However, this is only theory and related to the fact that an event only uses a delegate type instance. In reality an event is fired without expecting any return value, since the event’s originator does not require any handlers to be attached.
In practice, it is possible to re-use an event handler with different instances of the same type, or even with instances of different types that share the same event pattern. While the latter might not always be a good idea (depending on the scenario it can be a good solution, but usually we want to avoid it), the first case happens quite often. Let’s consider the following code snippet:
void CreateNumberButtons()
{
    for(int i = 1; i <= 9; i++)
    {
        Button bt = new Button();
        bt.Text = i.ToString();
        bt.Dock = DockStyle.Top;
        bt.Click += MyButtonHandler;
        this.Controls.Add(bt);
    }
}
 
Here we are creating 9 buttons, which will be added to the current Form‘s list of controls. We assign each button a handler for the event named Click. Instead of assigning different handlers, we always re-use the same handler: the method named MyButtonHandler, which will be called once the Click event is fired. The question now is: how can we distinguish between the various buttons in this handler? The answer is simple: let the first argument of the handler be the sender (originator) of the event! This is what our method looks like:
void MyButtonHandler(object sender, EventArgs e)
{
    Button bt = sender as Button;

    if(bt != null)
        MessageBox.Show(bt.Text);
}
 
It is also possible to specialize this signature in two ways:
  1. We could use a more specialized type for the sender. Most .NET events use Object as the sender’s type, which allows any object to be the originator. It is important to realize that this only applies to the signature of the handler, not to the actual event, e.g. the Click event of a Button.
  2. We could use a more specialized version of EventArgs. We will now discuss what this type represents.
The second argument is an object which transports variables / a state from the event’s origin to the handler. Some events just use a dummy type called EventArgs, while others use a more specialized version of EventArgs, which contains some properties (or even methods). In theory this argument is not required to derive from EventArgs; in practice, however, deriving from it is a good way of marking a type as a transport package.
Now we’ve already seen what the .NET standard event pattern is. It is a delegate in form of
delegate void EventHandler(object sender, EventArgs e);
 
where Object and EventArgs might be more specialized depending on the event. Let’s have a look at an example of a more specialized version. Every form has an event called MouseMove. This event uses another delegate named MouseEventHandler. The definition is as follows:
delegate void MouseEventHandler(object sender, MouseEventArgs e);
 
This handler does not look much different. The only difference is that a different type of package is used. Instead of the dummy (empty) EventArgs package, it is using the derived MouseEventArgs type. This package contains properties, which are filled with the corresponding values when firing the event.
class Form1 : Form
{
    Label info;

    public Form1()
    {
        info = new Label();
        info.Dock = DockStyle.Bottom;
        info.AutoSize = false;
        info.Height = 15;
        info.TextAlign = ContentAlignment.MiddleCenter;
        this.Controls.Add(info);
        this.MouseMove += HandleMove;
    }

    void HandleMove(object sender, MouseEventArgs e)
    {
        info.Text = string.Format("Current position: ({0}, {1}).", e.X, e.Y);
    }
}
 
In the given example we are creating a new Form called Form1. We add a Label to it, which will be docked at the bottom of the form. Then we wire up an event handler for the MouseMove event of the form. The last part is crucial, since it will not work while the mouse is moving over the Label. While some UI frameworks (like HTML, WPF, …) have the notion of bubbling events, i.e. events that are fired on all qualified layers and not just the top-most layer, we have to live without this feature in Windows Forms.
Now our event handler is able to retrieve information related to the event. In this case we have access to properties like X and Y, which will give us values for the X (from left) and Y (from top) value relative to the control that raised the event, which is the Form itself in this case.
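To see the whole pattern from the producer’s side, here is a hypothetical event that follows it. All names (Sensor, TemperatureChanged, TemperatureEventArgs) are invented for illustration; the generic EventHandler<T> delegate is part of the .NET-Framework:

class TemperatureEventArgs : EventArgs
{
    public double Celsius { get; private set; }

    public TemperatureEventArgs(double celsius)
    {
        Celsius = celsius;
    }
}

class Sensor
{
    //EventHandler<T> matches the standard pattern: (object sender, T e)
    public event EventHandler<TemperatureEventArgs> TemperatureChanged;

    public void Measure(double celsius)
    {
        var handler = TemperatureChanged;

        //The sender is this instance; the EventArgs subclass carries the state
        if (handler != null)
            handler(this, new TemperatureEventArgs(celsius));
    }
}

A handler can then be attached with sensor.TemperatureChanged += (s, e) => Console.WriteLine(e.Celsius); — exactly like the Click and MouseMove handlers above.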

 

Reflection

A programmer’s job description usually does not say a word about efficient or effective code. Also, pay rates are usually not on a per-line-of-code basis. So copy / paste is always an option! Nevertheless, most programmers are lazy and tend to search for more efficient ways, which result in fewer lines of code (no copy / paste) and more robust code (one change in the code triggers all other required changes – nothing breaks).
The CLR stores assemblies in a special way. Besides the actual (MSIL) code, a set of metadata information related to the assembly is saved as well. This metadata includes information about our defined types and methods. It does not include the exact algorithms, but the scheme. This information can be accessed and used with a concept called reflection. There are multiple ways of using reflection:
  1. Getting a Type instance at runtime by calling GetType() of an arbitrary object (instance).
  2. Getting a Type instance at compile-time by using typeof() of an arbitrary type, e.g. typeof(int).
  3. Using the Assembly class to load an assembly (the current one, a loaded assembly or an arbitrary CLR assembly from the file system).
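The first two ways can be sketched quickly; note that GetType() on an instance and typeof() on the corresponding type hand back the very same Type object:

object o = "hello";
Type t1 = o.GetType();       //runtime type of the instance: System.String
Type t2 = typeof(string);    //compile-time lookup: also System.String
Console.WriteLine(t1 == t2); //True - both refer to the same Type instance

This identity is why the second way boils down to the first one, as mentioned below.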
Of course there are also other ways, but in this tutorial we are only interested in those three. Of those three we can skip the second one, since (in the end) it will boil down to the first one. So let’s dive into this with a simple example in the form of the so-called Factory design pattern. This pattern is used to create a specialized version of a type depending on some parameters. Let’s start by defining some classes:
class HTMLElement
{
    string _tag;

    public HTMLElement(string tag)
    {
        _tag = tag;
    }

    public string Tag
    {
        get { return _tag; }
    }
}

class HTMLImageElement : HTMLElement
{
    public HTMLImageElement() : base("img")
    {
    }
}

class HTMLParagraphElement : HTMLElement
{
    public HTMLParagraphElement() : base("p")
    {
    }
}
 
We now have three classes, with the HTMLElement class being independent and the other two being derived from it. The scenario should now be quite simple: Another programmer should not have to worry about which class to create for what kind of parameter (which will be a simple string in this case), but should just call another static method called CreateElement in a class called Document:
class Document
{
    public static HTMLElement CreateElement(string tag)
    {
        /* code to come */
    }
}
A classical way to implement this factory method would be the following code:
switch(tag)
{
    case "img":
        return new HTMLImageElement();
    case "p":
        return new HTMLParagraphElement();
    default:
        return new HTMLElement(tag);
}
 
Now the problem with this code is that we have to specify the tag name over and over again. Of course we could change the “img” or “p” strings to constants; however, we still have to maintain a growing switch-case block. Just adding new classes is only half of the job. This results in a maintainability problem. Good code would maintain itself. Here is where reflection comes to help.
Let’s rewrite the implementation using reflection:
class Document
{
    //A (static) key-value dictionary to store string - constructor information.
    static Dictionary<string, ConstructorInfo> specialized;

    public static HTMLElement CreateElement(string tag)
    {
        //Has the key-value dictionary been initialized yet? If not ...
        if(specialized == null)
        {
            specialized = new Dictionary<string, ConstructorInfo>();

            //Get all types from the current assembly (that includes those HTMLElement types)
            var types = Assembly.GetCallingAssembly().GetTypes();

            //Go over all types
            foreach(var type in types)
            {
                //If the current type is derived from HTMLElement
                if(type.IsSubclassOf(typeof(HTMLElement)))
                {
                    //Get the parameterless constructor of the type
                    var ctor = type.GetConstructor(Type.EmptyTypes);

                    //If there is a parameterless constructor (otherwise we do not know how to create an object)
                    if(ctor != null)
                    {
                        //Call that constructor and treat the result as an HTMLElement
                        var element = ctor.Invoke(null) as HTMLElement;

                        //If all this succeeded add a new entry to the dictionary using the constructor and the tag
                        if(element != null)
                            specialized.Add(element.Tag, ctor);
                    }
                }
            }
        }

        //If the given tag is available in the dictionary then call the stored constructor to create a new instance
        if(specialized.ContainsKey(tag))
            return specialized[tag].Invoke(null) as HTMLElement;

        //Otherwise this is an element without a special implementation; we know how to handle this!
        return new HTMLElement(tag);
    }
}
 
It is obvious that the code got a lot longer. However, we will also realize that this is a robust solution that works perfectly for this special case. What does the code do exactly? Most of the code is actually spent building up a dictionary, which is then used to map certain scenarios (in this case certain tags) to a proper type (in this case a proper method in the form of the corresponding constructor). After this is done (it has to be invoked only once), the former switch-case reduces to these three lines:
if(specialized.ContainsKey(tag))
    return specialized[tag].Invoke(null) as HTMLElement;

return new HTMLElement(tag);
 
That’s short and easy, isn’t it? That’s the beauty and magic of reflection! The code now extends itself when adding new classes:
class HTMLDivElement : HTMLElement
{
    public HTMLDivElement() : base("div")
    {
    }
}

class HTMLAnchorElement : HTMLElement
{
    public HTMLAnchorElement() : base("a")
    {
    }
}
 
We now just added two more classes, but we do not have to care about the maintenance of our factory method. In fact everything will work out of the box!
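A quick usage sketch (assuming the element classes above are compiled into the assembly that calls CreateElement):

var img = Document.CreateElement("img"); //served from the dictionary
var div = Document.CreateElement("div"); //the new class is picked up automatically
var foo = Document.CreateElement("foo"); //unknown tag: plain HTMLElement fallback

Console.WriteLine(img.GetType().Name); //HTMLImageElement
Console.WriteLine(div.GetType().Name); //HTMLDivElement
Console.WriteLine(foo.GetType().Name); //HTMLElement

The caller never needs to know which concrete classes exist — that is exactly the point of the factory.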
Let’s step aside for one second and consider another example of using reflection. In the previous tutorial we had a look at anonymous objects. One way to use anonymous objects has been to initialize them with the var keyword (for enabling type inference). Therefore we could do the following:
var person = new { Name = "Florian", Age = 28 };
 
This works perfectly and we can access the members of the anonymous object within the current scope. However, once we have to pass this kind of object we are missing the correct type. We now have three options available:
  1. We do not use an anonymous type, but create a class to cover the data encapsulation.
  2. We use the DLR as presented in the next section.
  3. We change the specific argument type of the caller method to be a very general Object type and use reflection.
Since the current section discusses reflection we will try option number three. Let’s have a look at the code snippet in a bigger context:
void CreateObject()
{
    var person = new { Name = "Florian", Age = 28 };
    AnalyzeObject(person);
}

void AnalyzeObject(object o)
{
    /* use reflection here */
}
 
The question now is: What’s the purpose of the AnalyzeObject method? Let’s assume that we are only interested in the properties of the given object. We want to list their name, type and current value. Of course the GetType() method will play a very important role here. The implementation could look like the following code snippet:
//Get the type information
Type type = o.GetType();
//Get an array with property information
PropertyInfo[] properties = type.GetProperties();

//Iterate over all properties
foreach(var property in properties)
{
    //Get the name of the property
    string propertyName = property.Name;
    //Get the name of the type of the property
    string propertyType = property.PropertyType.Name;
    //Get the value of the property given in the instance o
    object propertyValue = property.GetValue(o);
    Console.WriteLine("{0}\t{1}\t{2}", propertyName, propertyType, propertyValue);
}
 
This all works quite nicely. The lesson here is regarding the GetValue method of the PropertyInfo class. This method is obviously interested in getting the value of an instance that has this specific property. It is important to differentiate between the pure type information, obtained by using GetType on an instance, and an instance. The instance is built upon the scheme described in a type. The type itself does not know about any instance.
However, there is a special case, in which it is sufficient to pass in null as the instance. Consider the following case:
class MyClass : IDisposable
{
    static int instances = 0;
    bool isDisposed;

    public static int Instances
    {
        get { return instances; }
    }

    public MyClass()
    {
        instances++;
    }

    public void Dispose()
    {
        isDisposed = true;
        instances--;
    }

    ~MyClass()
    {
        if(!isDisposed)
            Dispose();
    }
}
 
This class keeps track of its instances. What if we want to get the value of the property Instances by using reflection? Since Instances is a static property, the property itself is independent of a particular class instance. So the following code would work in this case:
var propertyInfo = typeof(MyClass).GetProperty("Instances");
var value = propertyInfo.GetValue(null);
 
Reflection requires null to be passed in as an argument quite often, however, one should always read the documentation related to a method before deciding what parameters would be the best choice for a special case.
Before stepping into the world of dynamic programming we will investigate one other interesting option with reflection: obtaining method information. Of course there are several other interesting possibilities, e.g. reading attributes or creating new types on the fly with System.Reflection.Emit.
Method information is quite similar to obtaining property information and even more similar to obtaining constructor information. In fact PropertyInfo, MethodInfo and ConstructorInfo all inherit from MemberInfo, with MethodInfo and ConstructorInfo indirectly inheriting from it while directly inheriting from MethodBase.
Let’s do the same thing as before with the anonymous object, but now reading out all available methods:
//Get the type information
Type type = o.GetType();
//Get an array with method information
MethodInfo[] methods = type.GetMethods();

//Iterate over all methods
foreach(var method in methods)
{
    //Get the name of the method
    string methodName = method.Name;
    //Get the name of the return type of the method
    string methodReturnType = method.ReturnType.Name;
    Console.WriteLine("{0}\t{1}", methodName, methodReturnType);
}
 
Reading out a value is much harder in this case, since we could only do this if a method has no parameters. Otherwise we would be required to find out which parameters are actually required. Nevertheless this is possible and could lead to a very simple unit testing tool, which only looks at public methods and tries to call them with default values (a default value would be null for any class and a logical default value for any structure, e.g. Int32 has a default value of 0).
If we execute the code above we will be surprised. This is the output we probably expected, knowing that any type derives from Object, which already gives us 4 methods:
Equals  Boolean
GetHashCode Int32
ToString String
GetType Type
 
However, this is the output that is actually displayed:
get_Name    String
get_Age Int32
Equals Boolean
GetHashCode Int32
ToString String
GetType Type
 
We see that two new methods have been inserted by the compiler. One method is called get_Name and returns a String object, while the other one is called get_Age and returns an integer. Quite obviously the compiler transformed our properties into methods. So overall any property is still a method – the GetProperty() and GetProperties() methods are just shortcuts to access them without iterating over all methods.
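We can verify this compiler transformation directly: accessor methods carry the IsSpecialName flag, so the following sketch lists only the compiler-generated getters of our anonymous object:

var person = new { Name = "Florian", Age = 28 };

foreach (var method in person.GetType().GetMethods())
{
    //Property accessors (get_/set_ methods) are marked as "special" methods
    if (method.IsSpecialName)
        Console.WriteLine(method.Name); //get_Name, get_Age
}

The inherited Object methods are not special-named and therefore do not show up here.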
In the end, reflection can therefore also teach us a lot about MSIL and the C# compiler. Everything we investigated here is reflected (pun intended) in the following scheme, which shows the object-tree that is used by reflection.

 

Dynamic programming

In the previous section we’ve already mentioned that another possibility to pass anonymous objects would be to use dynamic programming. Let’s see some code before we will actually dive into the Dynamic Language Runtime (DLR), which enables us to use dynamic programming in contrast to static programming with the CLR:
void CreateObject()
{
    var person = new { Name = "Florian", Age = 28 };
    UseObject(person);
}

void UseObject(object o)
{
    Console.Write("The name is . . . ");
    //This will NOT work!
    Console.WriteLine(o.Name);
}
 
We are doing the exact same thing as before, but now we are not interested in analyzing information about the type of the given instance – we want to actually use some properties or methods of the given Object. Of course the code above does not compile, since Object does not have a Name property. But what if the actual type has a property with this name? Is there a way to tell the compiler that it should ignore the error and try to map it at runtime? The answer is of course yes. As with var, the magic lies in a keyword type called dynamic. Let’s change our code:
void UseObject(dynamic o)
{
    Console.Write("The name is . . . ");
    //This will work!
    Console.WriteLine(o.Name);
}
 
Everything works as before. All we had to do is change the signature of the method. If we now type in o. in the body of the method UseObject, we will not get any IntelliSense support. This is a little bit annoying, but on the other hand we would have no IntelliSense support when using reflection as well!
So is this the end of the story? Of course not! First of all we need to realize that every standard CLR object can be treated as a dynamic object. So the following all works:
int a = 1;//a is of type Int32
var b = 1;//b is of type Int32 (inferred)
dynamic c = 1;//c is of type dynamic -- only known at runtime (but will be Int32)
object d = 1;//d is of type object, but the actual type is Int32
 
This seems to make no difference between the various definitions. However, there are actually a lot of differences. Let’s use those variables:
var a2 = a + 2;//works, Int32 + Int32 = Int32
var b2 = b + 2;//works, Int32 + Int32 = Int32
var c2 = c + 2;//works, dynamic + Int32 = dynamic
var d2 = d + 2;//does not work, object + Int32 = undef. 
 
While int is a real type (mapped to Int32), var is (in this case) only a keyword for the compiler to infer the type (which will be Int32). object is a real type, which boxes the actual type in this case from Int32. Now the first three operations worked – are they equal? Again we have to answer with no. Let’s have a look at this code snippet:
a = "hi";//Ouch!
b = "hi";//Ouch!
c = "hi";//Works!
d = "hi";//Works!
 
Here the first two assignments result in compilation errors. A string cannot be assigned to a variable of type Int32. However, a string can be cast to an Object. Also, dynamic means that the actual type might change to any other type at runtime.
So far we learned that dynamic variables are in fact quite dynamic. They provide all capabilities of the actual type behind a variable, and that type may even change at runtime. However, with great power comes great responsibility. This might result in problems as in the following code snippet:
dynamic a = "32";
var b = a * 5;
 
The compiler will not complain about using the multiplication with a String. However, at runtime we will get a really bad exception at this point. Detecting such lines might look easy in this example, but in reality the code looks much more like the following snippet:
dynamic a = 2;
/* lots of code */
a = "Some String";
/* some more code */
var b = 2;
/* and again more code */
var c = a * b;
 
Now it’s not so obvious any more. The complication arises due to the number of possible code paths. Dynamic programming offers some advantages in the area of mapping functionality. For instance using methods with dynamic types will always result in taking the closest matching overload. Let’s have a look at an example to see what this means:
var a = (object)2;//This will be inferred to be Object
dynamic b = (object)2;//This is dynamic and the actual type is Int32
Take(a); //Received an object
Take(b); //Received an integer

void Take(object o)
{
    Console.WriteLine("Received an object");
}

void Take(int i)
{
    Console.WriteLine("Received an integer");
}
 
Even though we assigned a variable of type Object to a dynamic variable the DLR still managed to pick the right overload. A question everyone has to answer for himself is: Is that what we really wanted? Usually if we already picked dynamic programming the answer is yes, but coming from static programming the answer is no. Still it’s nice to know that such a behavior is possible with C# and the DLR.
Up until here, we should have learned the following key points:
  • dynamic tells the compiler to let the variable be handled at runtime by the DLR.
  • Dynamic variables can be combined with any other variable resulting in a dynamic instance again.
  • The CLR type is still available, however, only at runtime. This makes predictions at compile-time impossible.
  • If an operation or method call with a dynamic variable is not available, we will get ugly exceptions.
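The "ugly exceptions" from the last point have a concrete type: a failed dynamic binding throws a Microsoft.CSharp.RuntimeBinder.RuntimeBinderException, which can be caught like any other exception. A minimal sketch, re-using the String-times-five mistake from above:

dynamic a = "32";

try
{
    var b = a * 5; //String defines no * operator -> binding fails at runtime
}
catch (Microsoft.CSharp.RuntimeBinder.RuntimeBinderException ex)
{
    Console.WriteLine("Binding failed: " + ex.Message);
}

Catching this exception is a last resort; the real lesson is that with dynamic, such errors surface only at runtime.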
One thing we might be interested in right now: where can we use this kind of dynamic programming? Let’s have a look at an interesting picture:
The Dynamic Language Runtime and its relation to other languages.
We see that the DLR is the layer that connects .NET languages (like C#) or flavors (like IronRuby) to various kinds of objects (like CLR objects or Python dynamic objects etc.). This means that anything that is dynamic supplies a binding mechanism (we could also write our own) that could be supported in a .NET language. This means that we can actually write a C# program that interacts with a script written in Python!
There are two more key lessons that should be learned in this section. The first will show us how to create our own dynamic type. The second one will give us some insight about practical usage of the DLR.
The DLR defines its types by implementing a special kind of interface. In this section we do not care about the exact details, but rather focus on some interesting classes that already implement this interface. Right now there are two really interesting classes, called ExpandoObject and DynamicObject. As an example we will now build our own type based on DynamicObject. Let’s name this type Person.
class Person : DynamicObject
{
    //This will be responsible for storing the properties
    Dictionary<string, object> properties = new Dictionary<string, object>();

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        //This will get the corresponding value from the properties
        return properties.TryGetValue(binder.Name, out result);
    }

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        //binder.Name contains the name of the variable
        properties[binder.Name] = value;
        return true;
    }

    public Dictionary<string, object> GetProperties()
    {
        return properties;
    }

    public override string ToString()
    {
        //Our object also has a specialized string output
        StringBuilder sb = new StringBuilder();
        sb.AppendLine("--- Person attributes ---");

        foreach (var key in properties.Keys)
        {
            //We use the chaining property of the StringBuilder methods
            sb.Append(key).Append(": ").AppendLine(properties[key].ToString());
        }

        return sb.ToString();
    }
}
 
How can this type be used? Let’s view an example:
dynamic person = new Person();
person.Name = "Florian";
person.Age = 28;
Console.WriteLine(person);
person.Country = "Germany";
Console.WriteLine(person);
 
This makes extending the existing object quite easy and everything works out-of-the-box like magic. Let’s now go on and look at a practical example. Usually one would pick communication with a dynamic scripting language (JavaScript, Python, PHP, Ruby, …), however, we will do something different.
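The other class mentioned above, ExpandoObject (in the System.Dynamic namespace), gives us the same member-by-assignment behavior without writing any code of our own. A minimal sketch:

dynamic person = new ExpandoObject();
person.Name = "Florian";
person.Age = 28;

//ExpandoObject also implements IDictionary<string, object>,
//so the dynamically added members can be enumerated as key-value pairs
foreach (var entry in (IDictionary<string, object>)person)
    Console.WriteLine("{0}: {1}", entry.Key, entry.Value);

Use ExpandoObject when a plain property bag is enough; derive from DynamicObject (as with Person) when custom get/set logic is needed.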
Consider accessing the nodes of an XML document. With the .NET-Framework class XmlDocument, the usual way looks roughly like the following code snippet:
var document = new XmlDocument();
document.Load("path/to/an/xml/document.xml");
var element = document.SelectSingleNode("root").SelectSingleNode("child");
 
Using the DLR – for instance with a dynamic wrapper around the XML document, built on DynamicObject much like our Person class – we can rewrite this to become:
dynamic document = new XmlDocument("path/to/an/xml/document.xml");
var element = document.root.child;
 
It is important to notice that element will be of type dynamic again, since document is dynamic. It should also be noted that if either root or child does not exist as a node, we will face some serious exceptions.
Another use-case of the DLR is in interop with COM-applications like Microsoft Office (Access, Excel, Word, …).

 

Accessing the file system

Before we go over to the interesting topics of multi-threading, concurrent and asynchronous programming, we will start using the System.IO namespace of the .NET-Framework. All classes in this namespace deal with input / output, mostly with the file system.
Let’s consider some simple tasks: we want to get information about a directory, or we want to know everything about a certain file. This means we need to read out information from the file system. However, due to a lucky coincidence Windows knows everything and has some good APIs for this communication. Thanks to the .NET-Framework those APIs are accessible to us in an object-oriented way.
Where should we start? The static class Path contains a lot of useful helpers and general variables like the path separator (Windows uses a backslash). We also have direct classes like Directory and File, which can be used to do some immediate actions. Additionally we have data encapsulations like DirectoryInfo, FileInfo or the DriveInfo class. The whole ACL (Access Control List) model can also be accessed using special classes and methods.
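A few of the static helpers in Path can be sketched quickly (the concrete paths are just examples; the separator output assumes Windows):

string path = Path.Combine(@"C:\Windows", "notepad.exe");
Console.WriteLine(path);                                   //C:\Windows\notepad.exe
Console.WriteLine(Path.GetExtension(path));                //.exe
Console.WriteLine(Path.GetFileNameWithoutExtension(path)); //notepad
Console.WriteLine(Path.DirectorySeparatorChar);            //\ (on Windows)

Using Path.Combine instead of manual string concatenation avoids missing or doubled separators.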
Let’s create a sample project using Windows Forms. On the main window we are placing two buttons and a ListBox control. The two buttons should get an event handler for the Click event, the ListBox control should get an event handler for the SelectedIndexChanged event. This event will be raised once the user changes the currently selected item in this control.
Our sample application should load all files (we are only interested in the names of these files) from a certain directory, as well as all currently used drive letters. When we press the first button the ListBox control should be populated. The second button should only be enabled if we have selected a valid file in the ListBox control. This Button control should then trigger a MessageBox to show a text representation of the content of the selected file.
public partial class Form1 : Form
{
    public Form1()
    {
        InitializeComponent();
    }

    //What happens if the first button is clicked?
    private void button1_Click(object sender, EventArgs e)
    {
        //Verbatim string literals have an @ symbol before the string.
        //There \ is no escape sequence starter.
        string path = @"C:\Windows";

        //Reading out the files of the given directory
        string[] files = Directory.GetFiles(path);

        //Reading out all drives
        DriveInfo[] drives = DriveInfo.GetDrives();

        //Adding all files
        foreach (var file in files)
        {
            //The listbox collection takes arbitrary objects as input
            //and uses ToString() for drawing
            FileInfo fi = new FileInfo(file);
            listBox1.Items.Add(fi);
        }

        //Adding all drives, however, one-by-one
        foreach (var drive in drives)
        {
            listBox1.Items.Add(drive);
        }
    }

    //What happens if the second button is clicked?
    private void button2_Click(object sender, EventArgs e)
    {
        //Just be sure that we really have an item selected
        //AND that this item is of type FileInfo
        var fi = listBox1.SelectedItem as FileInfo;

        if (fi != null)
        {
            //Read that file and show it in the MessageBox
            //(beware of large and non-text files!)
            string text = fi.OpenText().ReadToEnd();
            MessageBox.Show(text);
        }
    }

    //What if we change the selected index (i.e. pick another item)?
    private void listBox1_SelectedIndexChanged(object sender, EventArgs e)
    {
        //We only want to allow button2 to be enabled if a file is selected
        button2.Enabled = (listBox1.SelectedItem as FileInfo) != null;
    }
}
 
In this example we are already using various types from the System.IO namespace. We are actively using Directory to get a String array with file names. We are also using FileInfo to encapsulate a given filename (String) as a file object. Additionally we used DriveInfo to obtain an array of DriveInfo instances. A DriveInfo object is an encapsulation of everything related to a certain drive. It would also have been possible to use the GetFiles method of a DirectoryInfo instance. This method would have given us an array of FileInfo objects directly.
Now that we have an impression of what communication with the file system looks like, it is time to start reading and writing files.

 

Streams

The reason that the System.IO namespace is not named System.FS or System.FileSystem is quite simple: Communication with the file system is just one use-case of input and output – there are many more. In fact placing some string in the console is already a form of output. Reading a string or arbitrary keyboard input from the console is also input.
Some input and output operations are managed in a stream, i.e. a sequence of data elements which is made available over time. A stream can be thought of as a conveyor belt that allows items to be processed one at a time rather than in large chunks. There are all kinds of streams, like a memory stream, a file stream, input stream (e.g. from the keyboard) or output stream (e.g. to the display). Every IO operation is about streaming data. This is the reason that the System.IO namespace is mostly about the Stream class and its implementations.
The Stream class itself is abstract, since every stream of data is dependent on the corresponding device, e.g. HDD, RAM, ethernet, modem. Therefore there has to be a specific implementation for the corresponding device. Sometimes it might even make sense to implement our own Stream. Reading a file is possible by using a specialized class like FileStream. In the previous section we’ve already seen that there are some helper methods available. There we used the OpenText method of the FileInfo class to create a StreamReader object. This is a specialized version of the TextReader class, which works on a Stream. In the end we could just use the ReadToEnd method and did not have to worry about how to use a Stream.
Let’s see how we could use the FileStream class. In the code snippet below we will open a file called test.txt:
//Open the file test.txt (for reading only)
FileStream fs = new FileStream("test.txt", FileMode.Open);

//Allocate some memory
byte[] firstTenBytes = new byte[10];

//Read up to ten bytes and store them in the allocated memory
while (fs.Read(firstTenBytes, 0, 10) != 0)
{
    Console.WriteLine("Could read some more bytes!");
}

//Quite important: Closing the stream will free the handle!
fs.Close();
 
The code above opens a file and repeatedly reads up to 10 bytes until there are no more bytes to read. We could also advance byte-by-byte using the ReadByte method. While Read returns the actual number of bytes read (in this case 10 until the end is approached, where any number from 0 to 10 is possible), ReadByte returns the actual value of the byte in form of an Int32. The reason for this is that the byte type is an unsigned 8-bit integer, which has no way of signalling that the end of the stream has been reached. Using an Int32 we can check if we have reached the end of the stream (in this case the end of the file) by checking if the result is -1.
Writing a file is quite similar to reading it. Here we just use FileMode.Create to specify that we do not want to open an existing file, but create a new one. Now methods like WriteByte or Write can be invoked since the stream is writable. One thing we’ve seen above is now getting a lot more crucial: After our operations we have to close the file by disposing / closing the Stream object. This can be achieved with the method Close.
There are two more important concepts in the Stream class:
  1. Depending on the device, bytes could be buffered before any actual write. Therefore if immediate writing is required we have to use the Flush method.
  2. Every stream has a Position property, which is the current insertion / marker point. Any write or read operation will take place starting at that point. Therefore if we used a stream to go from start to end, we have to reset the Position marker to the beginning in order to start again.
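Both concepts can be demonstrated without touching the disk by using a MemoryStream, where Flush is simply a no-op:

```csharp
using System;
using System.IO;

class Program
{
    static void Main()
    {
        using (MemoryStream ms = new MemoryStream())
        {
            //Write three bytes - the Position advances with every write
            ms.Write(new byte[] { 1, 2, 3 }, 0, 3);
            Console.WriteLine(ms.Position); //3

            //Flush would force buffered bytes to the device (a no-op in memory)
            ms.Flush();

            //Reading now yields nothing - we are at the end of the stream
            Console.WriteLine(ms.ReadByte()); //-1

            //Reset the marker to read the stream from the beginning
            ms.Position = 0;
            Console.WriteLine(ms.ReadByte()); //1
        }
    }
}
```

The using block disposes (and therefore closes) the stream automatically, which is the idiomatic alternative to calling Close manually.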
Before we go on we should have a closer look at the already mentioned TextReader and TextWriter classes. Those two classes do not derive from Stream since they serve a different purpose: they are specialized in reading or writing text. The text could be in a raw byte array, in a string or in a stream. For each scenario there is a specific implementation. Since this section is about streams, we will introduce the implementations in form of StreamReader and StreamWriter.
Why should we use a StreamReader instance for handling a FileStream of a text file? The magic word here is: Encoding! In order to save text we need to specify how characters are mapped to numbers. The first 128 numbers (0 to 127) are always mapped according to the ASCII standard. In this standard we have normal letters like a being 0x61 and A being 0x41, as well as digits like 0 being 0x30. However, we also have special characters like a new line \n being 0x0a or a backspace \b being 0x08. The problem is that the remaining (usually more regionally common) characters depend on the chosen mapping. There are several encodings like UTF-8, UTF-16 or Windows-1252. The main questions are: How do we find out which encoding is used, and how do we use it?
The .NET-Framework has a (quite extensive) list of available encodings. Any Encoding instance has methods like GetString or GetChars, however, the TextReader / TextWriter implementations already use them for us. We can either specify an encoding when creating a StreamReader or StreamWriter or let the object detect the encoding. While the currency of a Stream object is a Byte, the currency of a TextReader is a Char.
Let’s see how we can use the StreamWriter to create some text:
StreamWriter sw = new StreamWriter("myasciifile.txt", false, Encoding.ASCII);
sw.WriteLine("My First ASCII Line!");
sw.WriteLine("How does ASCII handle umlauts äöü?");
sw.Close();
 
From the first tutorial we know that the Char datatype is a 16-bit unsigned integer. Therefore we might already have guessed that C# uses UTF-16 to store characters. While a displayed character can consist of 1 to 4 UTF-8 code units (one byte each), it consists of 1 or 2 UTF-16 code units (two bytes each). The minimum payload is therefore 2 bytes, twice as much as with UTF-8. This should motivate us to think about the encoding when passing characters from .NET memory to other systems. If the encoding is different than expected, the (displayed) text will differ from the original one.
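A small sketch makes these byte counts tangible. We encode the same three umlauts with three different Encoding instances and compare the resulting array lengths:

```csharp
using System;
using System.Text;

class Program
{
    static void Main()
    {
        string text = "äöü";

        //The same three characters need different amounts of bytes
        byte[] ascii = Encoding.ASCII.GetBytes(text);   //umlauts degrade to '?'
        byte[] utf8 = Encoding.UTF8.GetBytes(text);
        byte[] utf16 = Encoding.Unicode.GetBytes(text); //UTF-16, what .NET uses internally

        Console.WriteLine(ascii.Length); //3 (but the information is lost!)
        Console.WriteLine(utf8.Length);  //6 (two bytes per umlaut)
        Console.WriteLine(utf16.Length); //6 (two bytes per char)

        //Decoding with the wrong encoding garbles the text
        Console.WriteLine(Encoding.UTF8.GetString(utf16) == text); //False
    }
}
```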
Now we are actually at the point where things are starting to get interesting. The Stream class also contains methods whose names end with Async. In some cases we might actually be more interested in using ReadAsync and WriteAsync than their sequential counterparts Read and Write. In the next sections we will dive into asynchronous programming using C#.

 

Threads

One of the first examples in this tutorial has shown a simulation of the Application.Run() message loop. The code has been designed to exit quite quickly, however, commenting the exit-criteria will result in a permanent loop with firing a lot of events. If we would place a lot of code in our event-handler, then we would actually block the event loop from continuation.
This is exactly the problem that happens quite often in UI applications. An event (let’s say a Click event of a Button control) has been fired and we perform a lot of operations in our handler. Worst of all we are not only performing some calculations, but some IO operations like writing some file or downloading some data. If the operation takes quite long (let’s say more than a second) the user will experience that the application becomes unresponsive for that time. The reason is quite simple: as before, the message loop is blocked from continuation, which prevents those other events (like moving the form, clicking on some other button or something different) from being fired. After our operation, all queued messages will be handled.
Of course we therefore do not want to block the UI thread. Since the messages in the message loop get pumped once the application is idle, the idle state is very important. Doing too much work in the UI thread (like in an event handler) will result in a non-responsive app. Therefore the OS provides two models: threads and callbacks. Callbacks are just events, which we do know already. If we can respond to a change of state with an event, then we should pick this way. Otherwise we might spawn a thread which uses polling to get notified of a change of state. In this section we will look at how we can create and manage threads in the .NET-Framework.
Every application (no matter if console or graphical) comes already with 1 thread: this is the application / GUI thread. If we use multiple threads we might have the advantage of a faster execution with several (CPU) cores. The reason is that the OS distributes the threads across different cores (if available), since threads are considered to be independent units of work in the context of a process. Even on one core the OS is handling threads by assigning them CPU time. So even on just one core we might have an advantage in form of a more responsive UI. When the OS schedules CPU time it takes into account that the thread is only in a spin-lock state and does not require the maximum computing time.
Now the remaining question is: How can we create threads? First of all we need a method that should run in that thread. The first thread in every application is started by the standard process model – that is the Main method in C#. Now that we are programming the other threads by hand, we can start whatever method we like.
The class Thread represents a thread, with the constructor requiring a delegate of the method to run. This class is available in the namespace System.Threading. Let’s look at an example:
static void Main(string[] args)
{
    //Creating a thread and starting it is straight forward:
    //just pass in a void Method() or void Method(object) and invoke Start
    Thread t = new Thread(DoALotOfWork);
    t.Start();

    //Wait for the process to be finished by pressing the ENTER key
    Console.ReadLine();
}

static void DoALotOfWork()
{
    double sum = 0.0;
    Random r = new Random();
    Debug.WriteLine("A lot of work has been started!");

    //some weird algorithm
    while (true)
    {
        double d = r.NextDouble();

        //The Math class contains a set of useful functions
        if (d < 0.333) sum += Math.Exp(-d);
        else if (d > 0.666) sum += Math.Cos(d) * Math.Sin(d);
        else sum = Math.Sqrt(sum);
    }

    //This line is never reached, since the loop above runs forever
    Debug.WriteLine("The thread has been stopped!");
}
 
In the example above we start a new thread that uses the DoALotOfWork method as its entry point. Now we could enter some text or stop the program while work is still being done. This is because threads run concurrently. If we have two threads, then two things can be done at once; no one can tell us anything about the order of work. However, there is one big difference between the new thread and the existing thread: unhandled exceptions in worker threads cannot be caught in the starting thread; they will terminate the whole process!
Also the following considerations should be taken into account:
  • Spawning multiple threads results in overhead, which is why we should consider using the ThreadPool class for many threads.
  • Changing the UI from a worker thread is not possible and will result in exceptions.
  • We have to avoid race conditions, i.e. solving non-independent problems becomes challenging since the order of execution is not guaranteed.
 Therefore one of the biggest problems is: How to communicate between threads?
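Regarding the first point above, here is a minimal sketch of handing work to the pool of already running threads instead of spawning a fresh Thread, which avoids the per-thread creation overhead:

```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        //Instead of spawning a fresh thread the work item is queued
        //on an already existing pool thread - much less overhead
        ThreadPool.QueueUserWorkItem(state =>
        {
            Console.WriteLine("Running on a pool thread: " +
                Thread.CurrentThread.IsThreadPoolThread); //True
        });

        //Give the pool thread a moment to run before the process ends
        Thread.Sleep(500);
    }
}
```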

 

Thread-communication

Right now we only know what threads are and how to start them. At this point the threading concept looks more like overhead, since we might gain a responsive application, however, we have no way to communicate back any result of the thread’s execution.
In order to synchronize threads, C# offers the lock keyword. The lock statement ensures that at most one thread at a time can execute the lines of code condensed in its scope block. This works like a barrier. Barriers help to reduce race conditions, with the barrier being identified by a reference (memory address). Such a reference can be given by any reference type, e.g. a plain Object.
Let’s have a look at a short example using two threads:
//This object is only used for the locks
static Object myLock = new Object();

static void Main(string[] args)
{
    //Create the two threads
    Thread t1 = new Thread(FirstWorker);
    Thread t2 = new Thread(SecondWorker);

    //Run them
    t1.Start();
    t2.Start();

    Console.ReadLine();
}

static void FirstWorker()
{
    //This will run without any rule
    Console.WriteLine("First worker started!");

    //The rule for the following block is: only enter
    //when myLock is not in use, otherwise wait
    lock (myLock)
    {
        Console.WriteLine("First worker entered the critical block!");
        Thread.Sleep(1000);
        Console.WriteLine("First worker left the critical block!");
    }

    //Finally print this
    Console.WriteLine("First worker completed!");
}

static void SecondWorker()
{
    Console.WriteLine("Second worker started!");

    //The rule for the following block is: only enter
    //when myLock is not in use, otherwise wait
    lock (myLock)
    {
        Console.WriteLine("Second worker entered the critical block!");
        Thread.Sleep(5000);
        Console.WriteLine("Second worker left the critical block!");
    }

    //Finally print this
    Console.WriteLine("Second worker completed!");
}
 
If we run the program multiple times we will (usually) get different outputs. This is normal since we cannot tell which thread will be started first by the operating system. In fact our program is just telling the OS to start a thread, i.e. the OS can decide when the operation is performed.
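To see why such a critical block matters, consider this small sketch of two threads incrementing a shared counter. Without the lock, the interleaved read-modify-write operations would lose updates and the final count would (usually) be less than expected:

```csharp
using System;
using System.Threading;

class Program
{
    static int counter = 0;
    static readonly object sync = new object();

    static void Main()
    {
        Thread t1 = new Thread(CountUp);
        Thread t2 = new Thread(CountUp);
        t1.Start();
        t2.Start();

        //Join blocks until the corresponding thread has finished
        t1.Join();
        t2.Join();

        //With the lock this always prints 200000
        Console.WriteLine(counter);
    }

    static void CountUp()
    {
        for (int i = 0; i < 100000; i++)
        {
            //counter++ is not atomic: it reads, increments and writes
            lock (sync)
            {
                counter++;
            }
        }
    }
}
```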
Using the lock statement it is quite simple to mark critical blocks and ensure coherence in a multi-threaded program. However, this does not solve our problem with GUI programming, where we are not allowed to change the UI from a different thread than the GUI thread.
To solve this problem every Windows Forms control has a method called Invoke. Also other UI frameworks like WPF have something similar. In WPF we can use the (more general) Dispatcher property. However, the most general way of doing thread-safe UI calls is over the SynchronizationContext class. This class is available everywhere, even for console applications. The idea is the following: A direct communication between threads is not possible (since it is not thread-safe), however, one thread might call (or start) a method in the context of the other thread.
What does that mean? Thinking of a GUI we can easily construct a use case. We create a Windows Forms application with a Label, a ProgressBar and a Button control. Once the user presses the button a new thread is started, which performs a (long-running) computation. This computation has some fixed points, where we know that some percentage of the overall computation has already been done. At those points we use a globally available SynchronizationContext instance to start a method in the GUI thread, which sets the ProgressBar value to a given value. At the end of the computation we again use the SynchronizationContext to change the Text property of the Label to a given value.
Let’s have a look at the scheme of this example:
public class Form1 : Form
{
    SynchronizationContext context;
    bool running;

    public Form1()
    {
        InitializeComponent();
        //The Current property gets assigned when a Form / UI element is created
        context = SynchronizationContext.Current;
    }

    void ButtonClicked(object sender, EventArgs e)
    {
        //We only want to do the computation once at a time
        if (running)
            return;

        running = true;
        Thread t = new Thread(DoCompute);
        t.Start();
    }

    void DoCompute()
    {
        /* First half of long lasting computation */
        context.Send(_ =>
        {
            progressBar1.Value = 50;
        }, null);
        /* Second half of long lasting computation */
        context.Send(_ =>
        {
            progressBar1.Value = 100;
            label1.Text = "Computation finished!";
            running = false;
        }, null);
    }
}
The static property Current of the SynchronizationContext class carries the synchronization context that has been set for the current (!) thread. Therefore, if we want to use the value of Current that maps to the GUI thread, we need to store it while we are on the GUI thread. This property is not set automatically; it has to be set somewhere. In our case it is set by the Windows Forms Form instance.
Now that we have a rough understanding how we can avoid race conditions and cross-threading exceptions we can move on to a much more powerful and general concept: Tasks!

 

The Task Parallel Library

In the .NET-Framework 4.0 a new library has been introduced: the Task Parallel Library (TPL). This has some powerful implications. The most notable for us is the new datatype named Task. Some people consider a Task to be a nicely wrapped Thread, however, a Task is much more. A Task could be a running thread, but it could also be everything we need from a callback. In fact a Task does not say anything about the resource that is being used. If a Thread is used, then it is used in a much more reliable and performant way: the TPL manages an optimized thread-pool, which is specialized in creating and joining several threads within a short period of time.
So what is the TPL? It is a set of useful classes and methods for tasks, plus powerful (parallel) extensions to LINQ in form of PLINQ. The PLINQ part can be triggered by calling the AsParallel extension method before calling other LINQ extension methods. It should be noted that PLINQ queries often run slower than their sequential counterparts, since most queries do not require enough computational time to justify the overhead of creating threads.
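A query that does justify the overhead is one with a non-trivial predicate over a large range. Here is a small sketch using a naive primality test, which is expensive enough that AsParallel typically pays off:

```csharp
using System;
using System.Linq;

class Program
{
    static void Main()
    {
        //A workload heavy enough to justify the threading overhead
        var primes = Enumerable.Range(2, 1000000)
            .AsParallel() //everything after this point may run on several cores
            .Where(IsPrime)
            .Count();

        Console.WriteLine(primes);
    }

    //Naive trial division - intentionally expensive
    static bool IsPrime(int n)
    {
        for (int i = 2; i * i <= n; i++)
            if (n % i == 0)
                return false;

        return true;
    }
}
```

Note that AsParallel makes no ordering guarantees; if the result order matters, AsOrdered has to be added to the chain.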
The following picture illustrates the placement of the TPL and attached possibilities.
The TPL sits on top of the CLR threadpool and brings us some useful new types and methods.
The TPL gives us very elegant methods of parallelizing computational challenging methods across different threads. For instance if we use Parallel.For() we can split loops in chunks, which are distributed among different cores. However, we need to be careful with race conditions and overhead due to creation and management of corresponding threads. Therefore the best case is obviously found in a loop with many iterations and a huge workload in the loop body, which is independent of other iterations.
Let’s see how the TPL would help us to parallelize a for-loop. We start with the sequential version:
int N = 10000000;
double sum = 0.0;
double step = 1.0 / N;

for (var i = 0; i < N; i++)
{
    double x = (i + 0.5) * step;
    sum += 4.0 / (1.0 + x * x);
}

return sum * step;
 
The simplest parallel version would be the following:
object sync = new object();
int N = 10000000;
double sum = 0.0;
double step = 1.0 / N;

Parallel.For(0, N, i =>
{
    double x = (i + 0.5) * step;
    double y = 4.0 / (1.0 + x * x);

    lock (sync)
    {
        sum += y;
    }
});

return sum * step;
 
The reason for requiring a lock-block is the necessary synchronization. This very simple version is therefore not really performant: the synchronization overhead costs more than we gain by using multiple processors (the workload besides the synchronization is just too small). A better version uses another overload of the For method, which allows the creation of a thread-local variable.
object sync = new object();
int N = 10000000;
double sum = 0.0;
double step = 1.0 / N;

Parallel.For(0, N, () => 0.0, (i, state, local) =>
{
    double x = (i + 0.5) * step;
    return local + 4.0 / (1.0 + x * x);
}, local =>
{
    lock (sync)
    {
        sum += local;
    }
});

return sum * step;
 
This does not look too different, so several questions arise:
  1. Why is this more efficient? Because we enter the lock section only once per thread instead of once per iteration, we actually drop a lot of the synchronization overhead.
  2. Why do we need to pass in another delegate as the third parameter? In this overload the third parameter is the delegate for creating the thread-local variable. In this case we are creating one double variable.
  3. Why can’t we just pass in the thread-local variable? If the variable had already been created, it would not be thread-local but global. We would pass the very same variable to every thread.
  4. How is this overload distinguished? The signature of the delegate for the body changed as well.
  5. What are state and local? The state parameter gives us access to actions like breaking or stopping the loop execution (or finding out in what state we are), while local is our access point to the thread-local variable (in this case just a double).
  6. What if I need more thread-local variables? How about creating an anonymous object or instantiating a defined class? Since the TPL is no magic stick we still have to be aware of race conditions and shared resources. Nevertheless the TPL also introduces a new set of (concurrent) types, which are quite helpful in dealing with such problems.
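One of those concurrent types is ConcurrentBag from System.Collections.Concurrent. Here is a small sketch of filling one from many loop iterations without any explicit lock:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        //A ConcurrentBag can be filled from many threads without any lock
        var squares = new ConcurrentBag<long>();

        Parallel.For(0, 1000, i =>
        {
            squares.Add((long)i * i);
        });

        //All 1000 additions survived the concurrent access
        Console.WriteLine(squares.Count); //1000
    }
}
```

A plain List<long> in the same position would not be safe: concurrent Add calls on it can corrupt its internal state.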
In the last part of this section we should also discuss the consequences of a Task type. Tasks have some very nice features, most notable:
  • Tasks provide a much cleaner access to the current state.
  • The cancellation is much smoother and well-defined.
  • Tasks can be connected, scheduled and synchronized.
  • Exceptions from tasks do not bubble up unless requested!
  • A Task does not have a return type, but a Task<T> has return type T.
The last one is really important for us. If we start a new task that is computationally focused, we might be interested in the result of the computation. While doing this with a Thread requires some work, we get this behavior out-of-the-box with a Task.
Nowadays everything is centered around the Task class. We will now look closer at some of those properties.

 

Tasks and threads

As already mentioned there is a big difference between a Task and a Thread. While a Thread is something from the OS (a kind of resource), a Task is just some class. In a way we might say that a Task is a specialization of a Thread, however, this would not be true, since not all running Task instances are based on a Thread. In fact, all IO-bound asynchronous methods in the .NET-Framework which return a Task do not use a thread at all. They are all callback based, i.e. they use system notifications or already running threads from drivers or other processes.
Let’s recap what we learned about using threads:
  • The workload has to be big enough, i.e. at least as many instructions as it takes to create and end the thread (roughly 100000 cycles or 1 ms, depending on architecture, system and OS).
  • Just running a problem on more cores does not equal more speed, i.e. if we want to write a huge file to the hard disk it does not make sense to do that with multiple threads, since the hardware might be already saturated with the amount of bytes that are being received from one core.
  • Always think about IO-bound vs CPU-bound. If the problem is CPU-bound then multiple threads might be a good idea. Otherwise we should look for a callback solution or (worst-case, but still much better than using the GUI thread) create only one thread.
  • Reducing the required communication to a minimum is essential when aiming for an improved performance when using multiple threads.
We can already see why Task based solutions are preferable in general. Instead of providing two ways of solving things (either by creating a new thread, or by using a callback) we only need to provide one way of interacting with our code: the Task class. This is also the reason why the first asynchronous programming models of the .NET-Framework are being replaced by methods that return corresponding Task instances. Now the actual resource (a callback handler or a thread) does not matter anymore.
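How does a callback become a Task without occupying a thread? The standard tool for this is TaskCompletionSource. Here is a small sketch wrapping a callback-based API (a Timer) into a Task; the method name DelayedTimeAsync is invented for illustration:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    //Wraps a callback-based API (here: a Timer) into a Task.
    //No thread is blocked while waiting for the callback.
    static Task<DateTime> DelayedTimeAsync(int milliseconds)
    {
        var tcs = new TaskCompletionSource<DateTime>();
        Timer timer = null;
        timer = new Timer(_ =>
        {
            //Completing the source completes the returned task
            tcs.SetResult(DateTime.Now);
            timer.Dispose();
        }, null, milliseconds, Timeout.Infinite);
        return tcs.Task;
    }

    static void Main()
    {
        Task<DateTime> task = DelayedTimeAsync(100);
        Console.WriteLine(task.Result); //blocks until the callback fired
    }
}
```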
Let’s see how we can create a Task for computational purposes:
Task<double> SimulationAsync()
{
    //Create a new task with a lambda expression
    var task = new Task<double>(() =>
    {
        Random r = new Random();
        double sum = 0.0;

        for (int i = 0; i < 10000000; i++)
        {
            if (r.NextDouble() < 0.33)
                sum += Math.Exp(-sum) + r.NextDouble();
            else
                sum -= Math.Exp(-sum) + r.NextDouble();
        }

        return sum;
    });

    //Start it and return it
    task.Start();
    return task;
}
 
There is no hard requirement, however, the usual convention is to return a so-called hot task, i.e. we only want to return tasks that are already running. Now that we have this code, we could do several things:
var sim = SimulationAsync();

//Blocks the current execution until the result is available
var res = sim.Result;

//Continues with the given lambda from the current context
sim.ContinueWith(task =>
{
    //use task.Result here!
}, TaskScheduler.FromCurrentSynchronizationContext());
We could also spawn multiple simulations and use the one which finishes first:
var sim1 = SimulationAsync();
var sim2 = SimulationAsync();
var sim3 = SimulationAsync();

//This creates another task! (callback)
var firstTask = Task.WhenAny(sim1, sim2, sim3);

firstTask.ContinueWith(task =>
{
    //task.Result is the first simulation task that reached the end;
    //use task.Result.Result to access its value
}, TaskScheduler.FromCurrentSynchronizationContext());
Unfortunately not all features can be covered in this tutorial. One feature, however, that we have to analyze in more depth is the possibility to continue a task. In principle such a continuation could solve all our problems.

 

Awaiting async methods

In the latest version of C#, called C# 5, two new keywords have been introduced: await and async. By using async we mark methods as being asynchronous, i.e. the result of the method will be packaged in a Task (if nothing is returned) or a Task<T> if the return type of the method would be T. So the following two methods,
void DoSomething()
{
}

int GiveMeSomething()
{
return 0;
}
would transform to
async Task DoSomethingAsync()
{
}

async Task<int> GiveMeSomethingAsync()
{
    return 0;
}
 
The names have been changed to end with Async as well, however, this is just a useful convention and not a requirement. A very useful implication of transforming the inside of a method into a Task is that it can be continued with another Task instance. Now all our work here would be useless if there was not another keyword, which solves this continuation step automatically. This keyword is called await. It can only be used inside async marked methods, since only those methods will be changed by the compiler. At this point it is important to emphasize again that a Task is not a Thread, i.e. we do not say anything here about spawning new threads or which resources to use.
The purpose is to write (what looks like) sequential code, which runs concurrently. Every async marked method is always entered from the current thread until the await statement triggers new code execution in what-could-be another thread (but does not have to be – see: IO operations). Whatever happens, the UI stays responsive in that time, since the rest of the method is already transformed to a continuation of this triggered Task. The scheme is as follows:
async Task ReadFromNetwork(string url)
{
    //Create a HttpClient for webrequests
    HttpClient client = new HttpClient();

    //Do some UI (we are still in the GUI thread)
    label1.Text = "Requesting the data . . .";
    var sw = Stopwatch.StartNew();

    //Wait for the result with a continuation
    var result = await client.GetAsync(url);

    //Continue on the GUI thread no matter what thread has been used in the task
    sw.Stop();
    label1.Text = "Finished in " + sw.ElapsedMilliseconds + ".";
}
The big advantage is that the code reads very similar to a sequential code, while being responsive and running concurrently. Once the task is finished the method is resuming (usually we might want to do some UI related modifications in those sections).
We can also express this scheme in a picture (who knew!):
Using await / async to toggle between UI bound code and running tasks.
This alone is already quite handy, but it gets much better. Until this point there is nothing that we could not also have solved with a few more characters. So wait for this: What if we still want to use try-catch for handling exceptions? Using the ContinueWith method does not work quite well in such a scenario. We would have to use a different kind of pattern to catch exceptions. Of course this would be more complicated and it would also result in more lines of code.
async Task ReadFromNetwork(string url)
{
    /* Same stuff as before */
    try
    {
        await client.GetAsync(url);
    }
    catch
    {
        label1.Text = "Request failed!";
        return;
    }
    finally
    {
        sw.Stop();
    }

    label1.Text = "Finished in " + sw.ElapsedMilliseconds + ".";
}
 
This all works and is so close to sequential code that no one should still have excuses for non-responsive applications. Even old legacy methods are quite easy to wrap in a Task. Let’s say one has a computationally expensive method that should run in the background, however, rewriting it as a real asynchronous method is too complicated. Now we could do the following:
Task WrappedLegacyMethod()
{
return Task.Run(MyOldLegacyMethod);
}
 
This is called async-over-sync. The opposite is also possible of course, i.e. sync-over-async. Here we just have to omit the await and call the Result property, as we’ve seen before.
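Both directions can be sketched in one self-contained example. The method names and the dummy workload are invented for illustration; the pattern is the same as with the MyOldLegacyMethod wrapper above:

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    //async-over-sync: wrap an expensive synchronous method in a task
    static Task<double> WrappedLegacyMethodAsync()
    {
        return Task.Run(() => MyOldLegacyMethod());
    }

    //Some expensive legacy computation (here: a partial harmonic sum)
    static double MyOldLegacyMethod()
    {
        double sum = 0.0;

        for (int i = 1; i < 1000000; i++)
            sum += 1.0 / i;

        return sum;
    }

    static void Main()
    {
        //sync-over-async: omit the await and block on the Result property
        double result = WrappedLegacyMethodAsync().Result;
        Console.WriteLine(result);
    }
}
```

Blocking on Result is safe here because a console application has no SynchronizationContext to deadlock on; in UI code this is exactly the trap described in the list below.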
What are the “keep-in-mind” things when thinking about awaiting tasks?
  1. async void should only be used with event handlers. Not returning a Task is an anti-pattern that has only been made possible to allow the usage of await in event handlers, where the signature is fixed. Returning void is a fire-and-forget mechanism that does not trigger exceptions in try-catch-blocks and will usually result in faulty behavior.
  2. Using sync-over-async with an async function that switches to the UI will result in a deadlock, i.e. the UI is dead. This point is mostly important for people who want to develop APIs using async marked methods. Since the API will be UI-independent the context switch is unnecessary and a potential risk-factor. Avoid it by calling ConfigureAwait(false) on the awaited task.
  3. Spawning too many (parallel) tasks will result in a vast overhead.
  4. When using lambda expressions, better check that you are actually returning a Task.
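Point 4 deserves a small illustration (a sketch of my own, not from the original text): the very same async lambda is fire-and-forget when typed as Action, but awaitable when typed as Func&lt;Task&gt;:

```csharp
using System;
using System.Threading.Tasks;

static class LambdaPitfall
{
    public static async Task Demo()
    {
        // Compiles to async void: runs fire-and-forget,
        // exceptions bypass the caller's try-catch.
        Action bad = async () => await Task.Delay(10);

        // Compiles to a Task-returning delegate: completion can be awaited.
        Func<Task> good = async () => await Task.Delay(10);

        bad();        // returns immediately, nothing to await
        await good(); // properly awaited
    }
}
```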
So what’s the take-away from this section? C# makes programming robust (asynchronous) responsive applications nearly as easy as programming classic sequential (mostly non-responsive) applications.

 

Outlook

This concludes the third part of this tutorial series. In the next part we will have a look at powerful, yet lesser known (or used) features of C#. We will see how to easily construct IEnumerable instances, what co- and contravariance mean and how to use them. Additionally we will look closely at attributes and at interop between native code (e.g. C or C++) and managed code in the form of C#.
Another focus of the next tutorial will be more efficient, as well as cleaner, code, e.g. using elegant compiler-based attributes for getting information about the source.

 

References

C# WPF Tutorial – Implementing IScrollInfo [Advanced]

The ScrollViewer in WPF is pretty handy (and quite flexible) – especially when compared to what you had to work with in WinForms (ScrollableControl). 98% of the time, I can make the ScrollViewer do what I need it to for the given situation. The other 2 percent, though, can get kind of hairy. Fortunately, WPF provides the IScrollInfo interface – which is what we will be talking about today.
So what is IScrollInfo? Well, it is a way to take over the logic behind scrolling, while still maintaining the look and feel of the standard ScrollViewer. Now, first off, why in the world would we want to do that? To answer that question, I’m going to take an example from a tutorial that is over a year old now – Creating a Custom Panel Control. In that tutorial, we created our own custom WPF panel (that animated!). One of the issues with that panel, though (and with the WPF WrapPanel in general), is that you have to disable the horizontal scrollbar if you put the panel in a ScrollViewer.
If you don’t, you go from something that looks like this:
Wrap Panel with no horizontal scrollbar.
To something that looks like this:
Wrap Panel with horizontal scrollbar.
And that kind of really defeats the purpose of a wrap panel.
The problem with disabling the horizontal scroll bar altogether is a situation like this:
Wrap Panel with item bigger than width of panel.
In that case, you would really like a horizontal scroll bar to be there, but not change the wrapping behavior. And to get that behavior, you have to write your own custom scroll logic using IScrollInfo.
Ok, time to dive into the code. First, let’s take a look at what methods IScrollInfo requires us to implement:
public class AnimatedWrapPanel : IScrollInfo
{
public void LineDown(){ }

public void LineLeft(){ }

public void LineRight(){ }

public void LineUp(){ }

public void MouseWheelDown() { }

public void MouseWheelLeft() { }

public void MouseWheelRight() { }

public void MouseWheelUp() { }

public void PageDown() { }

public void PageLeft() { }

public void PageRight() { }

public void PageUp() { }

public ScrollViewer ScrollOwner { get; set; }

public bool CanHorizontallyScroll { get; set; }

public bool CanVerticallyScroll { get; set; }

public double ExtentHeight { get; }

public double ExtentWidth { get; }

public double HorizontalOffset { get; }

public double VerticalOffset { get; }

public double ViewportHeight { get; }

public double ViewportWidth { get; }

public Rect MakeVisible(Visual visual, Rect rectangle)
{ }

public void SetHorizontalOffset(double offset)
{ }

public void SetVerticalOffset(double offset)
{ }
}
 
Wow! That’s quite a lot of stuff there. But don’t worry – almost all of it is basic fill-in-the-blank. For instance, take all the ‘Up’, ‘Down’, ‘Left’, ‘Right’ methods. Those methods give you fine-grained control over how much your panel will scroll when the user clicks the up/down buttons on the scroll bar, or scrolls their mouse wheel. But for our purposes, they can be filled in pretty easily:
public class AnimatedWrapPanel : IScrollInfo
{
private const double LineSize = 16;
private const double WheelSize = 3 * LineSize;

public void LineDown()
{ SetVerticalOffset(VerticalOffset + LineSize); }

public void LineUp()
{ SetVerticalOffset(VerticalOffset - LineSize); }

public void LineLeft()
{ SetHorizontalOffset(HorizontalOffset - LineSize); }

public void LineRight()
{ SetHorizontalOffset(HorizontalOffset + LineSize); }

public void MouseWheelDown()
{ SetVerticalOffset(VerticalOffset + WheelSize); }

public void MouseWheelUp()
{ SetVerticalOffset(VerticalOffset - WheelSize); }

public void MouseWheelLeft()
{ SetHorizontalOffset(HorizontalOffset - WheelSize); }

public void MouseWheelRight()
{ SetHorizontalOffset(HorizontalOffset + WheelSize); }

public void PageDown()
{ SetVerticalOffset(VerticalOffset + ViewportHeight); }

public void PageUp()
{ SetVerticalOffset(VerticalOffset - ViewportHeight); }

public void PageLeft()
{ SetHorizontalOffset(HorizontalOffset - ViewportWidth); }

public void PageRight()
{ SetHorizontalOffset(HorizontalOffset + ViewportWidth); }

public ScrollViewer ScrollOwner { get; set; }

public bool CanHorizontallyScroll { get; set; }

public bool CanVerticallyScroll { get; set; }

public double ExtentHeight { get; }

public double ExtentWidth { get; }

public double HorizontalOffset { get; }

public double VerticalOffset { get; }

public double ViewportHeight { get; }

public double ViewportWidth { get; }

public Rect MakeVisible(Visual visual, Rect rectangle)
{ }

public void SetHorizontalOffset(double offset)
{ }

public void SetVerticalOffset(double offset)
{ }
}
 
Just set up some constants for the amount to scroll per line and per wheel click, and away we go! That’s over half the methods down already. Now let’s take care of some of those pesky properties:
public class AnimatedWrapPanel : IScrollInfo
{
private const double LineSize = 16;
private const double WheelSize = 3 * LineSize;

private bool _CanHorizontallyScroll;
private bool _CanVerticallyScroll;
private ScrollViewer _ScrollOwner;
private Vector _Offset;
private Size _Extent;
private Size _Viewport;

public void LineDown()
{ SetVerticalOffset(VerticalOffset + LineSize); }

public void LineUp()
{ SetVerticalOffset(VerticalOffset - LineSize); }

public void LineLeft()
{ SetHorizontalOffset(HorizontalOffset - LineSize); }

public void LineRight()
{ SetHorizontalOffset(HorizontalOffset + LineSize); }

public void MouseWheelDown()
{ SetVerticalOffset(VerticalOffset + WheelSize); }

public void MouseWheelUp()
{ SetVerticalOffset(VerticalOffset - WheelSize); }

public void MouseWheelLeft()
{ SetHorizontalOffset(HorizontalOffset - WheelSize); }

public void MouseWheelRight()
{ SetHorizontalOffset(HorizontalOffset + WheelSize); }

public void PageDown()
{ SetVerticalOffset(VerticalOffset + ViewportHeight); }

public void PageUp()
{ SetVerticalOffset(VerticalOffset - ViewportHeight); }

public void PageLeft()
{ SetHorizontalOffset(HorizontalOffset - ViewportWidth); }

public void PageRight()
{ SetHorizontalOffset(HorizontalOffset + ViewportWidth); }

public ScrollViewer ScrollOwner
{
get { return _ScrollOwner; }
set { _ScrollOwner = value; }
}

public bool CanHorizontallyScroll
{
get { return _CanHorizontallyScroll; }
set { _CanHorizontallyScroll = value; }
}

public bool CanVerticallyScroll
{
get { return _CanVerticallyScroll; }
set { _CanVerticallyScroll = value; }
}

public double ExtentHeight
{ get { return _Extent.Height; } }

public double ExtentWidth
{ get { return _Extent.Width; } }

public double HorizontalOffset
{ get { return _Offset.X; } }

public double VerticalOffset
{ get { return _Offset.Y; } }

public double ViewportHeight
{ get { return _Viewport.Height; } }

public double ViewportWidth
{ get { return _Viewport.Width; } }

public Rect MakeVisible(Visual visual, Rect rectangle)
{ }

public void SetHorizontalOffset(double offset)
{ }

public void SetVerticalOffset(double offset)
{ }
}
 
Pretty much we just needed to set up backing fields for all those properties. The property names are pretty self-explanatory – “Extent” is the total size of the panel, while “Viewport” is the amount that is visible on screen. “Offset” is the amount that the viewport is offset from (0,0) – i.e., how far down/right we have scrolled.
What is left are the more complicated parts of the interface. First, let’s fill out the SetHorizontalOffset and SetVerticalOffset calls:
public void SetHorizontalOffset(double offset)
{
    offset = Math.Max(0, Math.Min(offset, ExtentWidth - ViewportWidth));
    if (offset != _Offset.X)
    {
        _Offset.X = offset;
        InvalidateArrange();
    }
}

public void SetVerticalOffset(double offset)
{
    offset = Math.Max(0, Math.Min(offset, ExtentHeight - ViewportHeight));
    if (offset != _Offset.Y)
    {
        _Offset.Y = offset;
        InvalidateArrange();
    }
}
 
In both cases, we force the offset into a valid range, and then if it is different than the current offset, we set it as the new offset and invalidate the arrange of the panel (so that the items on the panel will get moved appropriately).
What’s left on the interface is MakeVisible, which is what gets called to scroll an item into view. The code in there is just a bunch of math to calculate new scroll offsets – I’m not going to walk through it, but you can check it out in the full code farther down the tutorial.
So the interface is fully implemented. But sadly, that isn’t enough – we still have to do things like calculate the Viewport and the Extent, as well as modify the MeasureOverride and ArrangeOverride to deal with their own scroll behavior.
If you take a look at the code in Creating a Custom Panel Control, the following code might look very familiar. This is because I tried to modify the code for the original animated wrap panel as little as possible – see if you can spot the changes:
protected override Size MeasureOverride(Size availableSize)
{
    double curX = 0, curY = 0, curLineHeight = 0, maxLineWidth = 0;
    foreach (UIElement child in Children)
    {
        child.Measure(InfiniteSize);

        if (curX + child.DesiredSize.Width > availableSize.Width)
        { //Wrap to next line
            curY += curLineHeight;
            curX = 0;
            curLineHeight = 0;
        }

        curX += child.DesiredSize.Width;
        if (child.DesiredSize.Height > curLineHeight)
        { curLineHeight = child.DesiredSize.Height; }

        if (curX > maxLineWidth)
        { maxLineWidth = curX; }
    }

    curY += curLineHeight;

    VerifyScrollData(availableSize, new Size(maxLineWidth, curY));

    return _Viewport;
}

protected override Size ArrangeOverride(Size finalSize)
{
    if (this.Children == null || this.Children.Count == 0)
    { return finalSize; }

    TranslateTransform trans = null;
    double curX = 0, curY = 0, curLineHeight = 0, maxLineWidth = 0;

    foreach (UIElement child in Children)
    {
        trans = child.RenderTransform as TranslateTransform;
        if (trans == null)
        {
            child.RenderTransformOrigin = new Point(0, 0);
            trans = new TranslateTransform();
            child.RenderTransform = trans;
        }

        if (curX + child.DesiredSize.Width > finalSize.Width)
        { //Wrap to next line
            curY += curLineHeight;
            curX = 0;
            curLineHeight = 0;
        }

        child.Arrange(new Rect(0, 0,
            child.DesiredSize.Width, child.DesiredSize.Height));

        trans.BeginAnimation(TranslateTransform.XProperty,
            new DoubleAnimation(curX - HorizontalOffset, _AnimationLength),
            HandoffBehavior.Compose);
        trans.BeginAnimation(TranslateTransform.YProperty,
            new DoubleAnimation(curY - VerticalOffset, _AnimationLength),
            HandoffBehavior.Compose);

        curX += child.DesiredSize.Width;
        if (child.DesiredSize.Height > curLineHeight)
        { curLineHeight = child.DesiredSize.Height; }

        if (curX > maxLineWidth)
        { maxLineWidth = curX; }
    }

    curY += curLineHeight;
    VerifyScrollData(finalSize, new Size(maxLineWidth, curY));

    return finalSize;
}
 
MeasureOverride is almost identical to the old code, except for two things. One, we keep track of the max row width, to correctly calculate the horizontal extent of the panel. Two, we have a call to VerifyScrollData at the end of the method – a method we have not seen yet (but will be taking a look at soon).
ArrangeOverride has a couple more changes. Again, we are keeping track of the max row width and calling VerifyScrollData. But we are also shifting the positions at which the items are placed by the amount of the scroll offset. Since we are in charge of the scrolling behavior, we are also in charge of making sure items are placed correctly according to it.
Ok, now for that VerifyScrollData method:
protected void VerifyScrollData(Size viewport, Size extent)
{
if (double.IsInfinity(viewport.Width))
{ viewport.Width = extent.Width; }

if (double.IsInfinity(viewport.Height))
{ viewport.Height = extent.Height; }

_Extent = extent;
_Viewport = viewport;

_Offset.X = Math.Max(0,
Math.Min(_Offset.X, ExtentWidth - ViewportWidth));
_Offset.Y = Math.Max(0,
Math.Min(_Offset.Y, ExtentHeight - ViewportHeight));

if (ScrollOwner != null)
{ ScrollOwner.InvalidateScrollInfo(); }
}
 
It is this function that sets the viewport and extent fields. It also coerces the offsets to be within the correct ranges (changes to the extent/viewport can make a previously valid offset invalid). Finally, if there is a scroll owner currently attached, we call InvalidateScrollInfo. This makes sure that the ScrollViewer is displaying the right ranges and positions for the scrollbars.
And that is it for your own implementation of IScrollInfo. Here is all the code together:
using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Controls.Primitives;
using System.Windows.Media;
using System.Windows.Media.Animation;

namespace AnimatedWrapPanel
{
public class AnimatedWrapPanel : Panel, IScrollInfo
{
private static Size InfiniteSize =
new Size(double.PositiveInfinity, double.PositiveInfinity);
private const double LineSize = 16;
private const double WheelSize = 3 * LineSize;

private bool _CanHorizontallyScroll;
private bool _CanVerticallyScroll;
private ScrollViewer _ScrollOwner;
private Vector _Offset;
private Size _Extent;
private Size _Viewport;

private TimeSpan _AnimationLength = TimeSpan.FromMilliseconds(200);

protected override Size MeasureOverride(Size availableSize)
{
    double curX = 0, curY = 0, curLineHeight = 0, maxLineWidth = 0;
    foreach (UIElement child in Children)
    {
        child.Measure(InfiniteSize);

        if (curX + child.DesiredSize.Width > availableSize.Width)
        { //Wrap to next line
            curY += curLineHeight;
            curX = 0;
            curLineHeight = 0;
        }

        curX += child.DesiredSize.Width;
        if (child.DesiredSize.Height > curLineHeight)
        { curLineHeight = child.DesiredSize.Height; }

        if (curX > maxLineWidth)
        { maxLineWidth = curX; }
    }

    curY += curLineHeight;

    VerifyScrollData(availableSize, new Size(maxLineWidth, curY));

    return _Viewport;
}

protected override Size ArrangeOverride(Size finalSize)
{
    if (this.Children == null || this.Children.Count == 0)
    { return finalSize; }

    TranslateTransform trans = null;
    double curX = 0, curY = 0, curLineHeight = 0, maxLineWidth = 0;

    foreach (UIElement child in Children)
    {
        trans = child.RenderTransform as TranslateTransform;
        if (trans == null)
        {
            child.RenderTransformOrigin = new Point(0, 0);
            trans = new TranslateTransform();
            child.RenderTransform = trans;
        }

        if (curX + child.DesiredSize.Width > finalSize.Width)
        { //Wrap to next line
            curY += curLineHeight;
            curX = 0;
            curLineHeight = 0;
        }

        child.Arrange(new Rect(0, 0,
            child.DesiredSize.Width, child.DesiredSize.Height));

        trans.BeginAnimation(TranslateTransform.XProperty,
            new DoubleAnimation(curX - HorizontalOffset, _AnimationLength),
            HandoffBehavior.Compose);
        trans.BeginAnimation(TranslateTransform.YProperty,
            new DoubleAnimation(curY - VerticalOffset, _AnimationLength),
            HandoffBehavior.Compose);

        curX += child.DesiredSize.Width;
        if (child.DesiredSize.Height > curLineHeight)
        { curLineHeight = child.DesiredSize.Height; }

        if (curX > maxLineWidth)
        { maxLineWidth = curX; }
    }

    curY += curLineHeight;
    VerifyScrollData(finalSize, new Size(maxLineWidth, curY));

    return finalSize;
}

#region Movement Methods
public void LineDown()
{ SetVerticalOffset(VerticalOffset + LineSize); }

public void LineUp()
{ SetVerticalOffset(VerticalOffset - LineSize); }

public void LineLeft()
{ SetHorizontalOffset(HorizontalOffset - LineSize); }

public void LineRight()
{ SetHorizontalOffset(HorizontalOffset + LineSize); }

public void MouseWheelDown()
{ SetVerticalOffset(VerticalOffset + WheelSize); }

public void MouseWheelUp()
{ SetVerticalOffset(VerticalOffset - WheelSize); }

public void MouseWheelLeft()
{ SetHorizontalOffset(HorizontalOffset - WheelSize); }

public void MouseWheelRight()
{ SetHorizontalOffset(HorizontalOffset + WheelSize); }

public void PageDown()
{ SetVerticalOffset(VerticalOffset + ViewportHeight); }

public void PageUp()
{ SetVerticalOffset(VerticalOffset - ViewportHeight); }

public void PageLeft()
{ SetHorizontalOffset(HorizontalOffset - ViewportWidth); }

public void PageRight()
{ SetHorizontalOffset(HorizontalOffset + ViewportWidth); }
#endregion

public ScrollViewer ScrollOwner
{
get { return _ScrollOwner; }
set { _ScrollOwner = value; }
}

public bool CanHorizontallyScroll
{
get { return _CanHorizontallyScroll; }
set { _CanHorizontallyScroll = value; }
}

public bool CanVerticallyScroll
{
get { return _CanVerticallyScroll; }
set { _CanVerticallyScroll = value; }
}

public double ExtentHeight
{ get { return _Extent.Height; } }

public double ExtentWidth
{ get { return _Extent.Width; } }

public double HorizontalOffset
{ get { return _Offset.X; } }

public double VerticalOffset
{ get { return _Offset.Y; } }

public double ViewportHeight
{ get { return _Viewport.Height; } }

public double ViewportWidth
{ get { return _Viewport.Width; } }

public Rect MakeVisible(Visual visual, Rect rectangle)
{
    if (rectangle.IsEmpty || visual == null
        || visual == this || !base.IsAncestorOf(visual))
    { return Rect.Empty; }

    rectangle = visual.TransformToAncestor(this).TransformBounds(rectangle);

    Rect viewRect = new Rect(HorizontalOffset,
        VerticalOffset, ViewportWidth, ViewportHeight);
    rectangle.X += viewRect.X;
    rectangle.Y += viewRect.Y;
    viewRect.X = CalculateNewScrollOffset(viewRect.Left,
        viewRect.Right, rectangle.Left, rectangle.Right);
    viewRect.Y = CalculateNewScrollOffset(viewRect.Top,
        viewRect.Bottom, rectangle.Top, rectangle.Bottom);
    SetHorizontalOffset(viewRect.X);
    SetVerticalOffset(viewRect.Y);
    rectangle.Intersect(viewRect);
    rectangle.X -= viewRect.X;
    rectangle.Y -= viewRect.Y;

    return rectangle;
}

private static double CalculateNewScrollOffset(double topView,
double bottomView, double topChild, double bottomChild)
{
bool offBottom = topChild < topView && bottomChild < bottomView;
bool offTop = bottomChild > bottomView && topChild > topView;
bool tooLarge = (bottomChild - topChild) > (bottomView - topView);

if (!offBottom && !offTop)
{ return topView; } //Don't do anything, already in view

if ((offBottom && !tooLarge) || (offTop && tooLarge))
{ return topChild; }

return (bottomChild - (bottomView - topView));
}

protected void VerifyScrollData(Size viewport, Size extent)
{
if (double.IsInfinity(viewport.Width))
{ viewport.Width = extent.Width; }

if (double.IsInfinity(viewport.Height))
{ viewport.Height = extent.Height; }

_Extent = extent;
_Viewport = viewport;

_Offset.X = Math.Max(0,
Math.Min(_Offset.X, ExtentWidth - ViewportWidth));
_Offset.Y = Math.Max(0,
Math.Min(_Offset.Y, ExtentHeight - ViewportHeight));

if (ScrollOwner != null)
{ ScrollOwner.InvalidateScrollInfo(); }
}

public void SetHorizontalOffset(double offset)
{
    offset = Math.Max(0,
        Math.Min(offset, ExtentWidth - ViewportWidth));
    if (offset != _Offset.X)
    {
        _Offset.X = offset;
        InvalidateArrange();
    }
}

public void SetVerticalOffset(double offset)
{
    offset = Math.Max(0,
        Math.Min(offset, ExtentHeight - ViewportHeight));
    if (offset != _Offset.Y)
    {
        _Offset.Y = offset;
        InvalidateArrange();
    }
}
}
}
 
Now, in order to use a class that implements IScrollInfo, you do have to do one other thing – you have to remember to set the property CanContentScroll to true on the ScrollViewer surrounding the instance of your class. This signifies to the ScrollViewer that the content controls its own scroll behavior – letting the logic you have written work its magic. So with that property set, your XAML might look something like this:
<Window x:Class="AnimatedWrapPanel.Window1"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:ARP="clr-namespace:AnimatedWrapPanel"
    Title="Animated Wrap Panel Test" Height="300" Width="300">
  <ScrollViewer CanContentScroll="True"
      HorizontalScrollBarVisibility="Auto"
      VerticalScrollBarVisibility="Auto">
    <ARP:AnimatedWrapPanel>
      <Image Source="Images\Aquarium.jpg" Stretch="Uniform"
          Width="100" Margin="5"/>
      <Image Source="Images\Ascent.jpg" Stretch="Uniform"
          Width="50" Margin="5" />
      <Image Source="Images\Autumn.jpg" Stretch="Uniform"
          Width="200" Margin="5"/>
      <Image Source="Images\Crystal.jpg" Stretch="Uniform"
          Width="75" Margin="5"/>
      <Image Source="Images\DaVinci.jpg" Stretch="Uniform"
          Width="125" Margin="5"/>
      <Image Source="Images\Follow.jpg" Stretch="Uniform"
          Width="100" Margin="5"/>
      <Image Source="Images\Friend.jpg" Stretch="Uniform"
          Width="50" Margin="5"/>
      <Image Source="Images\Home.jpg" Stretch="Uniform"
          Width="150" Margin="5"/>
      <Image Source="Images\Moon flower.jpg" Stretch="Uniform"
          Width="100" Margin="5"/>
    </ARP:AnimatedWrapPanel>
  </ScrollViewer>
</Window>
 
And this leaves you a panel that can scroll horizontally, but only when it needs to:
Resulting panel screenshot
Well, that about wraps it up. As always, you can grab the source for the example below if you want to play with the code on your own.

C# Tutorial – Poking at Event Contents [Advanced]

Events in C# always feel like there is a little touch of black magic in the background keeping things running smoothly. We have had tutorials here before on events in C# – we took a look at how to create your own custom events in C# Snippet Tutorial – Custom EventHandlers, and we looked at the syntactic sugar behind the += and -= operators for events in C# Tutorial – EventAccessors. But we have never taken a look at what actually happens when you declare an event, and what happens when you invoke it.
The key behind events in C# is the MulticastDelegate. You’ve probably seen delegates before (if you haven’t: they are just a reference to a method), and knowing that, you can probably wager a guess as to what a MulticastDelegate is. A MulticastDelegate is essentially a list of method references that acts like a regular old delegate in many ways. For instance, take a look at the following:
private void DoIt()
{
    var del = new Action(Method1);
    del();

    var del2 = Delegate.Combine(del, new Action(Method2)) as Action;
    del2();
}

private void Method1()
{ Console.WriteLine("I'm Method 1!"); }

private void Method2()
{ Console.WriteLine("I'm Method 2!"); }
 
If you run the function DoIt, the output is:
I'm Method 1!
I'm Method 1!
I'm Method 2!
 
This is because when del is invoked, it just calls Method1. But when del2 is invoked, both Method1 and Method2 get called, because the invocation list for del2 contains both methods. Both methods are in there because of the Combine call.
Once you know about the existence of the invocation list, you can start to do some interesting things. Take a look at the example below:
private void Method1()
{ Console.WriteLine("I'm Method 1!"); }

private void Method2()
{ Console.WriteLine("I'm Method 2!"); }

private void Method3()
{ Console.WriteLine("I'm Method 3!"); }

private void Method4()
{ Console.WriteLine("I'm Method 4!"); }

private void DoItAgain()
{
    var del = Delegate.Combine(new Action(Method1), new Action(Method2),
        new Action(Method3), new Action(Method4));

    var list = del.GetInvocationList();
    for (int i = 0; i < list.Length; i += 2)
    { ((Action)list[i])(); }
}
 
The output of the call to DoItAgain is the following:
I'm Method 1!
I'm Method 3!
 
In this case, we are skipping every other entry in the invocation list. Probably not a terribly useful thing to do, but you can see the power that being able to get to the list can give you.
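A more genuinely useful application (a sketch of my own, not part of the original example) is invoking each subscriber separately, so that one throwing handler cannot prevent the remaining handlers from running:

```csharp
using System;

static class SafeInvoke
{
    // Invokes every entry of a (possibly multicast) Action on its own and
    // counts the handlers that threw, instead of aborting the whole chain.
    public static int InvokeAll(Action handlers)
    {
        int failures = 0;
        foreach (Action handler in handlers.GetInvocationList())
        {
            try { handler(); }
            catch { failures++; } // one bad subscriber no longer stops the rest
        }
        return failures;
    }
}
```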
Ok, so enough about the specifics of MulticastDelegates. How exactly are they used in events? Well, every time you write something like this in code:
public event EventHandler MyBestEventEver;
 
The compiler is going in behind you and placing down a MulticastDelegate in the background. Now, this only happens if you aren’t declaring your own implementations of the add and remove accessors – when you do that (as we covered in C# Tutorial – EventAccessors), you are on your own for how the invocation list will actually be stored. But when you don’t declare your own accessors, the compiler automatically uses a MulticastDelegate.
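Roughly speaking, an event without custom accessors behaves as if the compiler had written a private delegate field plus add/remove accessors for you. This sketch shows the idea only; the code the compiler actually generates also uses Interlocked operations for thread safety:

```csharp
using System;

public class ExplicitEvent
{
    private EventHandler _changed; // stand-in for the generated backing field

    public event EventHandler Changed
    {
        add { _changed = (EventHandler)Delegate.Combine(_changed, value); }
        remove { _changed = (EventHandler)Delegate.Remove(_changed, value); }
    }

    public void RaiseChanged()
    {
        // Copying to a local guards against an unhook between check and call.
        var handler = _changed;
        if (handler != null)
        { handler(this, EventArgs.Empty); }
    }
}
```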
Now, when you are in the same class as where your event is declared, you can get to all the handy MulticastDelegate methods, like GetInvocationList:
public class MyEventTest
{
public event EventHandler Changed;

public void GetChangedHookCount()
{
var myList = Changed.GetInvocationList();
Console.WriteLine(myList.Length);
}
}
 
But if you are in a different class, the compiler says no:
public class MyEventTest
{
public event EventHandler Changed;
}

public class MyOtherClass
{
public void GetChangedHookCount()
{
MyEventTest test = new MyEventTest();
var myList = test.Changed.GetInvocationList();
Console.WriteLine(myList.Length);
}
}

//Error: The event 'MyEventTest.Changed' can only appear on the left hand side
//of += or -= (except when used from within the type 'MyEventTest')
 
This is because from outside MyEventTest the compiler can’t make any guarantees about how that event is implemented inside MyEventTest – maybe something crazy was done with the accessors? Maybe there is no MulticastDelegate at all?
This is a real bummer – because it means that it is really hard to get to and/or modify the contents of an event from outside the class it was declared in. Now, granted, from a security and good practices point of view, this is a very good thing – but every once in a while, it would come in really handy.
But don’t give up yet! If you know that the event is not using custom accessors (and most events don’t), you can still use some reflection to get to these pieces:
public static class EventUtilities
{
    public static Delegate[] GetInvocationList(string eventName, object obj)
    {
        bool success;
        var result = TryGetInvocationList(eventName, obj, out success);
        if (success)
        { return result; }
        else
        { throw new InvalidOperationException(); }
    }

    public static Delegate[] TryGetInvocationList(string eventName, object obj,
        out bool success)
    {
        success = false;

        if (obj == null)
        { throw new ArgumentNullException("obj"); }

        if (eventName == null)
        { throw new ArgumentNullException("eventName"); }

        var field = GetField(eventName, obj.GetType());
        if (field == null)
        { return null; }

        success = true;
        var mDel = field.GetValue(obj) as MulticastDelegate;

        if (mDel == null)
        { return null; }
        else
        { return mDel.GetInvocationList(); }
    }

    public static bool ClearInvocationList(string eventName, object obj)
    {
        if (obj == null)
        { throw new ArgumentNullException("obj"); }

        if (eventName == null)
        { throw new ArgumentNullException("eventName"); }

        var field = GetField(eventName, obj.GetType());

        if (field == null)
        { return false; }

        field.SetValue(obj, null);
        return true;
    }

    private static FieldInfo GetField(string eventName, Type type)
    {
        var field = type.GetField(eventName, BindingFlags.Instance |
            BindingFlags.NonPublic | BindingFlags.FlattenHierarchy | BindingFlags.Public);

        if (field == null)
        { return null; }

        if (field.FieldType == typeof(MulticastDelegate))
        { return field; }

        if (field.FieldType.IsSubclassOf(typeof(MulticastDelegate)))
        { return field; }

        return null;
    }
}
 
So this crazy code is all about pulling that compiler created MulticastDelegate for an event out into the open where we can mess with it. Given the name of an event and the object that it resides on, we can pull the MulticastDelegate field off of the type using reflection (that is what the GetField method is doing) and then call GetValue to actually pull out the instance of the MulticastDelegate. For more information on how reflection works, you can check out these two tutorials.
An interesting thing to note is that the backing MulticastDelegate field is null when there is nothing hooked to the event – which means that clearing all hooks to an event is as simple as setting that field to null (which is what the ClearInvocationList method is doing).
So how do we use these crazy methods? It’s pretty simple – let’s take the example above that wouldn’t compile and fix it up:
public class MyEventTest
{
public event EventHandler Changed;
}

public class MyOtherClass
{
public void GetChangedHookCount()
{
MyEventTest test = new MyEventTest();
var myList = EventUtilities.GetInvocationList("Changed", test);
Console.WriteLine(myList.Length);
}
}
 
In this case, myList would actually be null rather than an empty array, since nothing has been attached to the event yet and the backing field is therefore still null – so check for null before reading Length.
One huge caveat to end this tutorial – these methods will not work for poking at the contents of events on pretty much any WPF element. This is because almost every event on a WPF element implements its own special add/remove accessors – they almost never use the standard MulticastDelegate backing. WPF elements use their own special internal EventHandlersStore, which while in the end still holds MulticastDelegates, is much harder to get to. If you need to get at the contents of a WPF event (and I wouldn’t do this unless you really, really need to), I suggest pulling open Reflector to figure out exactly what to poke and prod at using reflection.
That’s it for this tutorial on poking at events and MulticastDelegates. I hope it shed some light on what is a mysterious black box to many .NET developers. As always, drop any questions you might have below, and I’ll do my best to answer them.

C# Tutorial – Object Finalizers [Advanced]

Recently, in a tutorial about WeakReferences in C#, we talked a bit about garbage collection and how the garbage collector works in .NET. I figured since we already started addressing that stuff, there is no reason not to delve deeper. And so, today we are going to take a look at how object finalizers work in C#.
What is an object finalizer? I’m glad you asked! They are essentially the cleanup functions for classes – when an object is collected by the garbage collector in .NET, the finalizer gets run, hopefully cleaning up any unmanaged resources that the object may have been holding on to (file references, window handles, network sockets, etc.). An object finalizer is in many ways similar to the C++ destructor, but unlike in C++, a programmer can never call a finalizer directly (in C++, there is the delete operator; .NET has no such equivalent).
So, first, let’s take a look at how to write a finalizer, and then we can delve into the details of when they run and other caveats.
class ClassWithFinalizer
{
System.Timers.Timer _SillyTimer;

public ClassWithFinalizer()
{
_SillyTimer = new System.Timers.Timer(100);
_SillyTimer.Elapsed +=
(a, b) => Console.WriteLine("Still Alive!");
_SillyTimer.Start();
}

//Finalizer
~ClassWithFinalizer()
{
_SillyTimer.Stop();
_SillyTimer.Dispose();
Console.WriteLine("You Killed Me!!");
}
}
 
The class in the code block above has a finalizer, and probably by looking at the code you have already figured out the syntax for writing your own. To write a finalizer method, all you do is create a method with the same name as the class (kind of like how you declare a constructor) and prefix it with a “~”. The method takes no arguments, and it does not use the public/private scoping keywords (because a finalizer never gets called explicitly in code anyway).
So what is the above class doing, anyway? Well, it’s kind of silly, but it shows off the finalizer pretty well. In the constructor we create a timer and set it to print “Still Alive!” every 100 milliseconds. So when we create this class, “Still Alive!” should print to the console window until the program closes… or at least that is what it would do if there weren’t a finalizer.
When this object gets garbage collected, it stops and disposes the timer, and prints out the final message “You Killed Me!!”. Below is some code that causes this behavior to happen, and the corresponding output:
static void Main(string[] args)
{
    new ClassWithFinalizer();
    System.Threading.Thread.Sleep(500);
    GC.Collect();
}

Still Alive!
Still Alive!
Still Alive!
Still Alive!
You Killed Me!!
 
At the start of the main method, we create an instance of ClassWithFinalizer, but we don’t assign the resulting reference to anything. That means that we created the object, but no one is referencing it, so at any point the garbage collector can come along and destroy it. We then sleep the main thread for a bit, possibly letting the instance of ClassWithFinalizer print out “Still Alive” a few times, and then we force a garbage collection by calling GC.Collect(). The garbage collector notices that no one is referencing the instance of ClassWithFinalizer, and so collects it, and in the process executes the finalizer, killing the timer and printing out the final message of “You Killed Me!!”
What if we didn’t have the explicit call to GC.Collect(), and the program just sat there? Well, let’s take that line out (and add a Console.Read() to make the program sit there):
static void Main(string[] args)
{
    new ClassWithFinalizer();
    Console.Read();
}

Still Alive!
Still Alive!
Still Alive!
Still Alive!
Still Alive!
Still Alive!
Still Alive!
Still Alive!
Still Alive!
Still Alive!
...
...
...
You Killed Me!!
 
Who knows how long it would be till the garbage collector finally tried to collect that instance of ClassWithFinalizer? It is quite possible it wouldn’t happen until the program closed.
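As an aside, if you ever need finalizers to run predictably (say, in a test), the GC class lets you block until queued finalizers have completed. A minimal sketch – GC.Collect and GC.WaitForPendingFinalizers are the real APIs, while the Noisy class is just made up for illustration:

```csharp
using System;

class Noisy
{
    ~Noisy() { Console.WriteLine("You Killed Me!!"); }
}

class Program
{
    static void MakeGarbage()
    {
        // Created in its own method so the reference is guaranteed
        // unreachable by the time we force a collection below.
        new Noisy();
    }

    public static void Main()
    {
        MakeGarbage();
        GC.Collect();                  // find the unreachable Noisy instance
        GC.WaitForPendingFinalizers(); // block until its finalizer has run
        Console.WriteLine("Done");
    }
}
```

After WaitForPendingFinalizers returns, the finalizer should already have printed its message before “Done” appears – though forcing collections like this in production code is almost always a bad idea.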
So does that all make sense? Good, cause it is about to become less clear. One of the main caveats of finalizers in .NET is that there are no guarantees about when the finalizer for an object will actually get run. Unlike in C++, where the destructor gets called as soon as an object goes out of scope (and if that’s not enough the delete operator can trigger the destructor explicitly), finalizers in C# are not deterministic. A C# finalizer will be called at some point between when the object is last used and the ending of the program – and you as a programmer don’t know any more than that.
Hey, and if you look even deeper, it gets yet more complicated – finalizers are not run immediately when the garbage collector gets around to realizing an object can be collected. The garbage collector sees that the object has a finalizer and so adds it to the finalization queue (or f-reachable queue). Eventually, the finalizer method gets run, and then when the garbage collector realizes that, it finally frees the object’s memory. So using finalizers can actually delay the real collection of an object for some number of garbage collection cycles.
In a simple example like the one above, everything works as expected. But when you get into bigger programs, this non-deterministic method of finalization can get in the way. The creators of .NET realized this, and so created the IDisposable pattern and the using statement, which give programmers the ability to do much more deterministic object disposal. That, however, is a discussion long enough for another tutorial all on its own.
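Still, as a quick preview, here is roughly how a finalizer and Dispose usually cooperate – a simplified sketch of the standard pattern, not the full story (the real pattern has a few more wrinkles around inheritance):

```csharp
using System;

class FileHolder : IDisposable
{
    bool _Disposed;

    public void Dispose()
    {
        Dispose(true);
        // Nothing left for the finalizer to do, so tell the GC to skip it -
        // this also avoids the f-reachable queue delay discussed above.
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_Disposed) { return; }
        // Free unmanaged resources here. Only touch managed IDisposable
        // members when 'disposing' is true (during a finalizer run they
        // may already have been finalized themselves).
        _Disposed = true;
    }

    // Safety net in case the caller never calls Dispose.
    ~FileHolder() { Dispose(false); }
}
```

With this in place, `using (var f = new FileHolder()) { /* ... */ }` disposes deterministically, and the finalizer only runs if the caller forgets.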

C# Tutorial – Weak References [Advanced]

We all know (hopefully) that C# is a garbage-collected language. In general, what this means is that we as programmers don’t need to free our own memory – the garbage collector will free that memory for us once it is no longer being referenced. Now, of course, garbage collection is a lot more complicated than that, and writing a good garbage collector is actually a relatively hard problem. And the fact that writing a perfect garbage collector is probably impossible is the reason why things like C#’s Weak Reference object exist.
Generally, when you talk about a reference to an object in .NET (and in most other garbage collected languages), you are talking about a “strong” reference. As long as that reference exists, the garbage collector won’t collect the object behind that reference. A weak reference is a special type of reference where the garbage collector can still collect the underlying object, even though you still technically have a reference to it.
The key here is to remember that the garbage collector is not running all the time. As far as we, the programmers of an application, know it is completely random and could kick in at any time. This means that an object only referenced through a weak reference could sit around for a long time, or for virtually no time at all (and really, it is even more complicated than trying to figure out the next time the garbage collector will run – because the garbage collector for C# is generational). And, as soon as you copy the reference out of the weak reference variable into a regular reference, the underlying object will no longer be collected (assuming that it hadn’t already been collected), because now you have a strong reference to it.
Ok, enough with this theoretical talk. Let’s get down to some code, and hopefully we can show how this weak reference object is actually useful.
public string _FilePath = "PathToMyImportantFile.dat";
public WeakReference _FileWeakRef = new WeakReference(null);

public List<string> ImportantBigFileContents
{
    get
    {
        List<string> fileStrongRef = _FileWeakRef.Target as List<string>;

        if (fileStrongRef == null)
        {
            using (StreamReader r = new StreamReader(_FilePath))
            { fileStrongRef = ParseImportantData(r); }

            _FileWeakRef.Target = fileStrongRef;
        }

        return fileStrongRef;
    }
}
 
Say I had a large chunk of external data that would be handy to keep in memory, but really isn’t used very often (or maybe it is used a bunch in bursts). This is exactly what the weak reference object is good for. In the code above, I am storing the parsed version of some “Important Data” in a WeakReference variable. What this means is that when someone tries to access the ImportantBigFileContents, the parsed data may or may not still be in memory.
So first, we try and pull the reference out of the _FileWeakRef object. If that is null, we load the data from the file, parse it, store it and hand it back. Otherwise we hand back what we got from the weak reference variable. So this means that sometimes, the data will be in memory, but other times the code will have to go out and reload it. This doesn’t make sense to do in all cases (or even in many cases), but if the data is accessed in bursts, and you really didn’t want to keep it in memory all the time anyway, this gives you what you need with very little extra work (the garbage collector does your management for you).
Now, there are a couple of common tear-your-hair-out mistakes that can be made when using weak references. See if you can tell what is wrong with the code below (and remember, the garbage collector might run at any time):
public List<string> ImportantBigFileContents
{
    get
    {
        if (_FileWeakRef.Target == null)
        {
            using (StreamReader r = new StreamReader(_FilePath))
            { _FileWeakRef.Target = ParseImportantData(r); }
        }

        return _FileWeakRef.Target as List<string>;
    }
}
 
Figure it out? Yup, the garbage collector could run during this property access – cleaning up the memory behind this weak reference just as we were about to hand it back:
public List<string> ImportantBigFileContents
{
    get
    {
        if (_FileWeakRef.Target == null)
        {
            using (StreamReader r = new StreamReader(_FilePath))
            { _FileWeakRef.Target = ParseImportantData(r); }
        }

        /* Garbage collector could run right here. Whoops! */

        return _FileWeakRef.Target as List<string>;
    }
}
 
So always remember to pull the reference out into a strong (or regular) reference before you do any manipulation or checking – otherwise, things could change right out from under you.
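As a side note, newer versions of the framework (.NET 4.5 and up) ship a generic WeakReference&lt;T&gt; whose TryGetTarget method does the null check and the strong-reference copy in one step, which makes the mistake above harder to write by accident. A sketch, using a made-up cache of strings:

```csharp
using System;
using System.Collections.Generic;

class StringCache
{
    readonly WeakReference<List<string>> _WeakData =
        new WeakReference<List<string>>(new List<string> { "important" });

    public List<string> Data
    {
        get
        {
            // TryGetTarget either hands us a strong reference or reports
            // that the object is gone - no gap between check and copy.
            List<string> strongRef;
            if (!_WeakData.TryGetTarget(out strongRef))
            {
                strongRef = new List<string> { "important" }; // reload/rebuild
                _WeakData.SetTarget(strongRef);
            }
            return strongRef;
        }
    }
}
```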
Well, that’s all for an intro to the weak reference object. For you Java developers out there, you actually have an equivalent (and it works almost exactly the same) – the WeakReference class.

C# WPF Tutorial – Print Queues And Capabilities [Intermediate]

We have taken a look at printing in WPF twice before here at SOTC – first with a simple tutorial on just getting something printed, and then a more complex one on pagination. Today we are not going to focus much on the printing side of things, but more on the printer side. For example, how do you get a list of the printers available on the system? Or their capabilities? If you need the answers to those questions, then this is the tutorial for you.
Today, we will be creating a little sample application that finds all the printers on your system (both local and network printers) and lists them out. When you pick a particular printer, you will get a list of the supported page sizes for that printer. Once you pick a page size, you can then print a test page to the chosen printer at the chosen page size (and you can even pick landscape or portrait). Oh, and did I mention that we never have to show the standard print dialog for any of this?
Example App Screenshot
Ok, to start off we first have to get a hold of the list of printers attached to the system – well, actually, the list of print queues attached to the system. These are not necessarily physical printers (for example, if you have a PDF Printer installed on your system), but they do represent something you can print to. Doing this is actually pretty easy – you just have to know where to go. First, we need to add the System.Printing dll as a reference to our Visual Studio project, since most of what we need resides in that dll. Once we have that, we want to get a hold of the local printer server – which really couldn’t be any easier:
var server = new PrintServer();
 
By default, when you create a new PrintServer instance, it connects to the local print server. There are other constructors on PrintServer that take things like a path to a different machine (in case you wanted the print server for, say, some central network system), but for today, all we care about is the local server.
Now that we have the print server, to get all the available print queues, we need to call the method GetPrintQueues. This method has a number of different signatures to make it easy for you to get the queues that you want. The no argument version of the function will generally do what you need – it will return all the queues that are attached directly to that print server (in this case, any printer attached to your computer).
For our sample application, we want to go a little bit beyond that – we want to grab any network printers as well. To do this we need to use the GetPrintQueues call that takes an array of EnumeratedPrintQueueTypes:
var server = new PrintServer();
var queues = server.GetPrintQueues(new[] { EnumeratedPrintQueueTypes.Local,
EnumeratedPrintQueueTypes.Connections});
The no argument version of GetPrintQueues is equivalent to calling this with just EnumeratedPrintQueueTypes.Local, and by adding EnumeratedPrintQueueTypes.Connections, we get network printers.
Now that we have a collection of PrintQueues, let’s take a look at what we can do with one. There is a whole bunch of stuff available off of PrintQueue – you can look at printer status, what jobs are currently queued, and all sorts of other things. At the moment, though, we are interested in printer capabilities. To get the capabilities, you call the method GetPrintCapabilities, which returns a PrintCapabilities object. The capabilities class covers everything from paper size, to duplexing, even down to whether the printer supports automatic stapling. Oh, and an important thing to note – the PrintCapabilities class is in the ReachFramework dll, so to do anything with capabilities, you have to add that dll to your Visual Studio project references.
var server = new PrintServer();
var queues = server.GetPrintQueues(new[] { EnumeratedPrintQueueTypes.Local,
    EnumeratedPrintQueueTypes.Connections });

foreach (var queue in queues)
{
    Console.WriteLine(queue.Name);
    var capabilities = queue.GetPrintCapabilities();
    foreach (PageMediaSize size in capabilities.PageMediaSizeCapability)
    { Console.WriteLine(size.ToString()); }
    Console.WriteLine();
}
 
The code above will print out, for each printer, every paper size that that printer is capable of handling. For example, on my system:
Send To OneNote 2007
NorthAmericaLetter (816 x 1056)
NorthAmericaTabloid (1056 x 1632)
NorthAmericaLegal (816 x 1344)
ISOA3 (1122.51968503937 x 1587.40157480315)
ISOA4 (793.700787401575 x 1122.51968503937)
ISOA5 (559.370078740158 x 793.700787401575)
JISB4 (971.338582677165 x 1375.74803149606)
JISB5 (687.874015748032 x 971.338582677165)
JapanHagakiPostcard (377.952755905512 x 559.370078740158)

HP Color LaserJet CP2020 Series PCL 6
NorthAmericaLetter (816 x 1056)
NorthAmericaLegal (816 x 1344)
NorthAmericaExecutive (695.811023622047 x 1008)
ISOA3 (1122.51968503937 x 1587.40157480315)
ISOA4 (793.700787401575 x 1122.51968503937)
ISOA5 (559.370078740158 x 793.700787401575)
JISB4 (971.338582677165 x 1375.74803149606)
JISB5 (687.874015748032 x 971.338582677165)
NorthAmerica11x17 (1056 x 1632)
NorthAmericaNumber10Envelope (395.716535433071 x 912)
ISODLEnvelope (415.748031496063 x 831.496062992126)
ISOC5Envelope (612.283464566929 x 865.511811023622)
ISOB5Envelope (665.196850393701 x 944.88188976378)
NorthAmericaMonarchEnvelope (371.905511811024 x 720)
ISOA6 (396.850393700787 x 559.370078740158)
 
So that covers enough of the basics that we can start putting together the example application. First, we have most of the code behind:
using System;
using System.Printing;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;

namespace PrintQueuesExample
{
    public partial class Window1 : Window
    {
        PrintQueueCollection _Printers;

        public Window1()
        {
            _Printers = new PrintServer().GetPrintQueues(new[] {
                EnumeratedPrintQueueTypes.Local, EnumeratedPrintQueueTypes.Connections });

            InitializeComponent();
        }

        public PrintQueueCollection Printers
        { get { return _Printers; } }

        private void PrintTestPageClick(object sender, RoutedEventArgs e)
        {
            //TODO: Print Test Page
        }
    }

    public class PrintQueueToPageSizesConverter : IValueConverter
    {
        public object Convert(object value, Type targetType,
            object parameter, System.Globalization.CultureInfo culture)
        {
            return value == null ? null :
                ((PrintQueue)value).GetPrintCapabilities().PageMediaSizeCapability;
        }

        public object ConvertBack(object value, Type targetType,
            object parameter, System.Globalization.CultureInfo culture)
        { throw new NotImplementedException(); }
    }
}
 
Nothing new here – this is just a reorganization of the code that we have already covered, getting it into a form that can be easily used by WPF controls (a public collection of the print queues, a converter to get from a print queue to a collection of PageMediaSizes). Now for some XAML:
<Window x:Class="PrintQueuesExample.Window1"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:local="clr-namespace:PrintQueuesExample" x:Name="This"
    Title="Print Queue Example" Height="300" Width="300">

  <Window.Resources>
    <local:PrintQueueToPageSizesConverter x:Key="printQueueToPageSizesConverter" />

    <Canvas x:Key="MyPrintingExample">
      <TextBlock>
        A bunch of text.
      </TextBlock>
      <Ellipse Width="30" Height="50" Canvas.Left="200"
               Canvas.Top="10" Fill="Blue" />
      <Rectangle Width="100" Height="10" Canvas.Left="175"
                 Canvas.Top="150" Fill="Red" />
      <TextBlock FontSize="100" Foreground="Green"
                 FontWeight="Bold" Canvas.Top="150"
                 Canvas.Left="20">
        { }
      </TextBlock>
    </Canvas>
  </Window.Resources>

  <Grid>
    <Grid.RowDefinitions>
      <RowDefinition Height="Auto" />
      <RowDefinition />
      <RowDefinition Height="Auto" />
      <RowDefinition />
      <RowDefinition Height="Auto" />
      <RowDefinition Height="Auto" />
    </Grid.RowDefinitions>

    <TextBlock Text="Available Printers" FontSize="14" />
    <ListBox x:Name="_PrinterList" DisplayMemberPath="Name"
             x:FieldModifier="private" Grid.Row="1"
             ItemsSource="{Binding ElementName=This, Path=Printers}" />
    <TextBlock Text="Available Page Sizes for Selected Printer" FontSize="14"
               Grid.Row="2" Margin="0 5 0 0"/>
    <ListBox Grid.Row="3" x:Name="_SizeList" x:FieldModifier="private"
             ItemsSource="{Binding ElementName=_PrinterList, Path=SelectedItem,
                 Converter={StaticResource printQueueToPageSizesConverter}}" />

    <StackPanel Orientation="Horizontal" Grid.Row="4" Margin="0 5 0 5">
      <RadioButton Content="Portrait" x:Name="_PortraitRadio" Margin="0 0 10 0"
                   x:FieldModifier="private" IsChecked="True" />
      <RadioButton Content="Landscape" x:FieldModifier="private" />
    </StackPanel>

    <Button Grid.Row="5" HorizontalAlignment="Right" Content="Print Test Page"
            Click="PrintTestPageClick" />
  </Grid>
</Window>

 
Walking through this XAML, at the top we have some resources – notably the PrintQueueToPageSizesConverter and a Canvas. It might seem a little odd to have a Canvas in resources, but here it is the contents of our test page (the page that will get printed when a user clicks on the “Print Test Page” button). By putting it in the resources, we get the benefit of defining it in XAML, without the downside of it actually being in the visual tree of the Window.
Past that, we get to the meat. We have a ListBox bound to the collection of print queues (Printers) that we created in the C# code. We then have a second ListBox that is bound to the selected item in the first list box, using the PrintQueueToPageSizesConverter. This way, we get the collection of available PageMediaSizes for the selected print queue.
Then we have two radio buttons for portrait and landscape – these aren’t attached to anything, we will just be querying their values when the user clicks “Print Test Page”. And finally, we have the “Print Test Page” button, which is hooked to the method PrintTestPageClick.
Now the only thing we haven’t covered so far is how to take a print queue and some selected configuration information, and actually print something. For that, we have the contents of the PrintTestPageClick method:
private void PrintTestPageClick(object sender, RoutedEventArgs e)
{
    var queue = _PrinterList.SelectedItem as PrintQueue;
    if (queue == null)
    {
        MessageBox.Show("Please select a printer.");
        return;
    }

    var size = _SizeList.SelectedItem as PageMediaSize;
    if (size == null)
    {
        MessageBox.Show("Please select a page size.");
        return;
    }

    queue.UserPrintTicket.PageMediaSize = size;
    queue.UserPrintTicket.PageOrientation = _PortraitRadio.IsChecked == true ?
        PageOrientation.Portrait : PageOrientation.Landscape;

    var canvas = (Canvas)Resources["MyPrintingExample"];
    canvas.Measure(new Size(size.Width.Value, size.Height.Value));
    canvas.Arrange(new Rect(0, 0, canvas.DesiredSize.Width,
        canvas.DesiredSize.Height));

    var writer = PrintQueue.CreateXpsDocumentWriter(queue);
    writer.Write(canvas);
}
 
To set up custom settings for a print job, you want to modify the UserPrintTicket on the print queue you want to print to. The UserPrintTicket is what will be looked at when the time comes to print, and is in fact what the standard print dialog modifies as the user changes settings in the dialog. So here, we want to set the PageMediaSize property to the selected size, and the PageOrientation property to the selected orientation.
One important thing to note – just because a printer is capable of a particular paper size does not mean that it currently has a tray filled with that type of paper. Choosing a paper size that a printer supports but does not have any of is valid, and the end result varies depending on the printer. Some printers will print on the nearest possible size or their default size, others will wait until the user puts in the correct size paper. Unfortunately, there isn’t a way (that I know of) to query what types of paper a printer has at this very moment – all the print capability stuff is about what a printer can potentially do.
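In the same defensive spirit, you can sanity-check other settings against the capabilities before stamping them onto the ticket. A short sketch (this reuses the queue variable from the earlier snippets; DuplexingCapability and OrientationCapability are collections on PrintCapabilities, just like PageMediaSizeCapability):

```csharp
// Reuses 'queue' (a PrintQueue) from the snippets above.
var capabilities = queue.GetPrintCapabilities();

// Only ask for double-sided output if the printer can actually do it.
if (capabilities.DuplexingCapability.Contains(Duplexing.TwoSidedLongEdge))
{ queue.UserPrintTicket.Duplexing = Duplexing.TwoSidedLongEdge; }

// Same idea for orientation.
if (capabilities.OrientationCapability.Contains(PageOrientation.Landscape))
{ queue.UserPrintTicket.PageOrientation = PageOrientation.Landscape; }
```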
Ok, back to printing out the test page. We grab the canvas out from the resource dictionary, and measure and arrange it according to the chosen paper size. Then we use a static method on PrintQueue called CreateXpsDocumentWriter to create an XpsDocumentWriter for the print queue we want to print on. We hand our canvas to that XpsDocumentWriter – and lo and behold, printed output:
Sample Printout
Well, that is it for this quick introduction to PrintServers, PrintQueues, and PrintCapabilities. You can grab the Visual Studio project for the example application below if you would like to poke at the printers on your own computer.

C# WPF Tutorial – Dynamic Data and the TreeView [Intermediate]

One WPF control that we haven’t taken a look at here is the TreeView. Well, no more! Today we are going to rectify that, as we build an application that not only uses the TreeView, but also dynamically loads data into it on demand. We are going to cover a couple of other new topics as well, including HierarchicalDataTemplates and CompositeCollections.
So what are we building? A pretty simple app that pulls the tree hierarchy of categories and images from GamingTextures and displays it in a TreeView. Gaming Textures has a couple of calls that we can make to get lists of base categories and then the children for each category – so we will be making a web request on demand to get the children for a category, parsing the resulting JSON into C# objects, and then adding those items to the tree view.
For example, we start out with the list of base categories:
App Screenshot 1
When an item is expanded, we send off a request for the children:
App Screenshot 2
And once we have the children, we display them (complete with helpful tooltips!):
App Screenshot 3
Ok, so how do we do this? Well, it is time to find out! Let’s start with some simple XAML for the basic window layout:
<Window x:Class="WpfTreeView.TreeViewWindow"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:sotc="clr-namespace:WpfTreeView"
    Title="Tree View Example" Height="300" Width="300">
  <TreeView>
    <TreeViewItem Header="Categories" x:Name="_ImageTree"
                  x:FieldModifier="private">
      <TreeViewItem TextBlock.FontStyle="Italic"
                    Header="Loading..."/>
    </TreeViewItem>
  </TreeView>
</Window>
 
This gives us a basic layout that looks like this:
App Screenshot 4
Just by looking at that code snippet, you have probably already figured out the basics of using a TreeView. You just populate it with TreeViewItems. The Header property on TreeViewItem is the content that will appear for that item, and any children of the TreeViewItem will appear as children in the tree.
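If you prefer code-behind, the same two-item tree can be built in C# as well – a quick sketch of the equivalent of the XAML above (TreeBuilder is just a made-up wrapper class):

```csharp
using System.Windows;
using System.Windows.Controls;

class TreeBuilder
{
    static TreeView BuildTree()
    {
        var root = new TreeViewItem() { Header = "Categories" };
        root.Items.Add(new TreeViewItem()
        { Header = "Loading...", FontStyle = FontStyles.Italic });

        var tree = new TreeView();
        tree.Items.Add(root); // children nest just like in the XAML
        return tree;
    }
}
```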
The “Loading…” tree view item is just there as a placeholder – as you might suspect, when the items actually load, we will be replacing that item. So let’s take a look at how to load those items:
public partial class TreeViewWindow : Window
{
    public const string BaseUrl = "http://www.gamingtextures.com";
    public const string QueryURl = BaseUrl + "/Callbacks/query.php";

    public TreeViewWindow()
    {
        InitializeComponent();
        var wc = new WebClient();
        wc.OpenReadCompleted += BaseCategoryReadCompleted;
        wc.OpenReadAsync(new Uri(QueryURl + "?QType=AllBaseCats"));
    }

    private void BaseCategoryReadCompleted(object sender,
        OpenReadCompletedEventArgs e)
    {
        if (e.Error != null || e.Cancelled)
        {
            ((TreeViewItem)_ImageTree.Items[0]).Header =
                "Error Getting Base Categories";
            return;
        }

        _ImageTree.Items.Clear();
        _ImageTree.ItemsSource = Category.DeserializeJson(e.Result);
    }
}
 
So when the application starts up, we immediately go off and try to load the list of base categories. This follows the same asynchronous WebClient pattern we will use for every request in this app – kick off the request, and deal with the result in a completed handler. If there is an error with the web request, we replace the text “Loading…” with the error message:
Tree View App Error Screenshot
But what if we do get the data back correctly (which hopefully we do)? What do we do then? Well, we clear that “Loading…” item out of the tree view, and then we deserialize the JSON – which means we have to take a look at the Category class:
public class Category
{
    private bool _Loaded = false;

    public int IDCategory { get; set; }
    public string CatName { get; set; }
    public string CatDescription { get; set; }
    public CompositeCollection Children { get; set; }

    public Category()
    {
        Children = new CompositeCollection();
        Children.Add(new TextBlock() {
            Text = "Loading...", FontStyle = FontStyles.Italic });
    }

    public static List<Category> DeserializeJson(Stream stream)
    {
        var json = new DataContractJsonSerializer(typeof(List<Category>));
        return json.ReadObject(stream) as List<Category>;
    }
}
 
The method DeserializeJson takes a stream and deserializes it as a List of Category objects. The deserialization process fills in the fields IDCategory, CatName, and CatDescription. In addition, when a new Category instance is created, we fill the Children collection with a “Loading…” TextBlock. We will see how this is used in a moment.
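Since the deserialization is doing a lot of the heavy lifting in this app, here is a tiny self-contained round trip showing DataContractJsonSerializer in action (the JSON string is a made-up sample matching the property names; the real Gaming Textures responses may look different):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Runtime.Serialization.Json;
using System.Text;

class CategoryDto
{
    public int IDCategory { get; set; }
    public string CatName { get; set; }
    public string CatDescription { get; set; }
}

class Demo
{
    public static string FirstCategoryName()
    {
        var json = "[{\"IDCategory\":1,\"CatName\":\"Stone\"," +
                   "\"CatDescription\":\"Stone textures\"}]";
        var serializer = new DataContractJsonSerializer(typeof(List<CategoryDto>));
        using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(json)))
        {
            // ReadObject maps the JSON fields onto the matching properties.
            var cats = (List<CategoryDto>)serializer.ReadObject(stream);
            return cats[0].CatName;
        }
    }

    static void Main()
    {
        Console.WriteLine(FirstCategoryName()); // Stone
    }
}
```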
So now we have a collection of Category objects, but that isn’t enough to display them in the tree view correctly. In fact, if we try to right now, we will get something that looks like this:
Tree View App Missing Template
We have to add a data template to the XAML to get the categories to look correct:
<Window x:Class="WpfTreeView.TreeViewWindow"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:sotc="clr-namespace:WpfTreeView"
    Title="Tree View Example" Height="300" Width="300">
  <Window.Resources>
    <HierarchicalDataTemplate DataType="{x:Type sotc:Category}"
                              ItemsSource="{Binding Path=Children}">
      <TextBlock Text="{Binding Path=CatName}"
                 ToolTip="{Binding Path=CatDescription}" />
    </HierarchicalDataTemplate>
  </Window.Resources>

  <TreeView>
    <TreeViewItem Header="Categories" x:Name="_ImageTree"
                  x:FieldModifier="private">
      <TreeViewItem TextBlock.FontStyle="Italic"
                    Header="Loading..."/>
    </TreeViewItem>
  </TreeView>
</Window>
 
Here we are using a HierarchicalDataTemplate for the categories. By setting the DataType property to the type Category, we ensure that this type of template will be used anytime a Category instance appears. The ItemsSource property gets bound to the children of the category (i.e., the Children property – which at the moment just holds the “Loading…” TextBlock). Finally, the content of the template is what will be used for the header of the tree view item – and here we just make a TextBlock whose text is the category name and whose tooltip is the category description.
So now with all that work, you will get an application that looks like this:
Tree View App Category Template
Ok, now we want to actually load the category children. The first step is to get notification that the user actually expanded a category. To do this, we add a handler on the window for all TreeViewItem Expanded events:
AddHandler(TreeViewItem.ExpandedEvent, 
new RoutedEventHandler(TreeItemExpanded), true);
 
The Expanded event gets fired when a TreeViewItem is expanded. By setting up this handler, the method TreeItemExpanded will get called for any Expanded event for any TreeViewItem in this window.
private void TreeItemExpanded(object sender, RoutedEventArgs e)
{
    var item = e.OriginalSource as TreeViewItem;
    if (item == null)
    { return; }
    var cat = item.DataContext as Category;
    if (cat == null)
    { return; }
    cat.LoadChildren();
}
 
So when this method gets called the original source will be the TreeViewItem being expanded. If the DataContext of that item is a Category instance, then we need to load the children (and so we call LoadChildren):
public void LoadChildren()
{
    if (_Loaded)
    { return; }

    _Loaded = true;
    var wc = new WebClient();
    wc.OpenReadCompleted += CategoryReadCompleted;
    wc.OpenReadAsync(new Uri(TreeViewWindow.QueryURl
        + "?QType=NextCatChildren&IDCat=" + IDCategory));
}
 
If we have already loaded the children for this category, don’t do anything. Otherwise, set that flag to true (we are loading them now!) and send off a new web request. This request will return any child categories for this category:
private void CategoryReadCompleted(object sender, OpenReadCompletedEventArgs e)
{
    if (e.Error != null || e.Cancelled)
    {
        ((TextBlock)Children[0]).Text = "Error Getting Category Children";
        return;
    }

    var list = DeserializeJson(e.Result);
    _ActualChildrenCount += list.Count;
    Children.Insert(0, new CollectionContainer() { Collection = list });

    var wc = new WebClient();
    wc.OpenReadCompleted += ImageReadCompleted;
    wc.OpenReadAsync(new Uri(TreeViewWindow.QueryURl
        + "?QType=NextImgChildren&IDCat=" + IDCategory));
}
 
So when the web request returns, we do the same type of thing as we did when loading the base categories. If there was an error, we replace the “Loading…” text with an error message. Otherwise, we deserialize the result into a list of category objects. We then add this collection to the children – and this is where the CompositeCollection starts to come in handy.
You might be wondering what in the world a CompositeCollection is. Well, it allows you to have a collection of both items and other collections of various types – and when it is used as an ItemsSource, the content is flattened out into a single list for display. For instance, we now have a collection that contains a TextBlock and a separate collection of Categories. So at this point, the app looks something like this:
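A tiny standalone illustration of what the CompositeCollection holds at this point (strings stand in for our TextBlock and Category objects; note that the flattening itself only happens when WPF binds the collection as an ItemsSource):

```csharp
using System.Windows.Data; // CompositeCollection, CollectionContainer (PresentationFramework)

class CompositeDemo
{
    static CompositeCollection Build()
    {
        var composite = new CompositeCollection();
        composite.Add("Loading...");            // a single item...
        composite.Add(new CollectionContainer() // ...plus a whole collection
        { Collection = new[] { "Stone", "Wood", "Metal" } });

        // Bound as an ItemsSource, this renders as four flat items:
        // Loading..., Stone, Wood, Metal
        return composite;
    }
}
```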
Only child categories loaded
Ok, but now that we have the category children, it is time to get the image children. At the end of CategoryReadCompleted, you probably noticed the new web request being sent off – this is the request for the image children. When that returns, it will hit this code:
private void ImageReadCompleted(object sender, OpenReadCompletedEventArgs e)
{
    if (e.Error != null || e.Cancelled)
    {
        ((TextBlock)Children[1]).Text = "Error Getting Category Children";
        return;
    }

    Children.RemoveAt(1);
    var list = GTImage.DeserializeJson(e.Result);
    _ActualChildrenCount += list.Count;
    Children.Add(new CollectionContainer() { Collection = list });

    if (_ActualChildrenCount == 0)
    { Children.Add(new TextBlock() { Text = "No Children" }); }
}
 
Same type of error cases here as in the other two read completed handlers. If the read did complete, we remove the “Loading…” TextBlock from the children, and we deserialize the stream – except this time we are getting back a collection of GTImages:
public class GTImage
{
    public int IDImage { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }

    public string IconPath
    {
        get
        {
            return TreeViewWindow.BaseUrl
                + "/Images/image.php?IDImage=" + IDImage;
        }
    }

    public string ThumbnailPath
    {
        get
        {
            return TreeViewWindow.BaseUrl
                + "/Images/image.php?IDTFS=3&IDImage=" + IDImage;
        }
    }

    public static List<GTImage> DeserializeJson(Stream stream)
    {
        var json = new DataContractJsonSerializer(typeof(List<GTImage>));
        return json.ReadObject(stream) as List<GTImage>;
    }
}
 
The GTImage class is pretty simple – the fields getting set by the deserializer are IDImage, Name, and Description.
So now our categories are getting both child categories and child images. But currently our GTImage class is template-less, which means that the app ends up looking like so:
No template for images
So it is time to break out that template:


<DataTemplate DataType="{x:Type sotc:GTImage}">
    <StackPanel Orientation="Horizontal">
        <Image Source="{Binding Path=IconPath}" Width="16"
               Height="16" Margin="0 2 2 2" />
        <TextBlock Text="{Binding Path=Name}"
                   VerticalAlignment="Center" />
        <StackPanel.ToolTip>
            <StackPanel Orientation="Horizontal">
                <Image Source="{Binding Path=ThumbnailPath}"
                       Width="64" Height="64" Margin="0 2 4 0" />
                <TextBlock Text="{Binding Path=Description}"
                           VerticalAlignment="Center" />
            </StackPanel>
        </StackPanel.ToolTip>
    </StackPanel>
</DataTemplate>

 
Just like with the Category template, we set the DataType property to make it so that this template is applied for every instance of GTImage. Past that, it is some pretty standard use of WPF controls. A StackPanel to lay out the icon image and the name, and another StackPanel in the ToolTip to lay out the larger image and the description.
And that is it! Now the app looks like the screenshots at the top of the tutorial. Here is all the code together in a single block:
<Window x:Class="WpfTreeView.TreeViewWindow"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:sotc="clr-namespace:WpfTreeView"
    Title="Tree View Example" Height="300" Width="300">
  <Window.Resources>
    <HierarchicalDataTemplate DataType="{x:Type sotc:Category}"
                              ItemsSource="{Binding Path=Children}">
      <TextBlock Text="{Binding Path=CatName}"
                 ToolTip="{Binding Path=CatDescription}" />
    </HierarchicalDataTemplate>

    <DataTemplate DataType="{x:Type sotc:GTImage}">
      <StackPanel Orientation="Horizontal">
        <Image Source="{Binding Path=IconPath}" Width="16"
               Height="16" Margin="0 2 2 2" />
        <TextBlock Text="{Binding Path=Name}"
                   VerticalAlignment="Center" />
        <StackPanel.ToolTip>
          <StackPanel Orientation="Horizontal">
            <Image Source="{Binding Path=ThumbnailPath}"
                   Width="64" Height="64" Margin="0 2 4 0" />
            <TextBlock Text="{Binding Path=Description}"
                       VerticalAlignment="Center" />
          </StackPanel>
        </StackPanel.ToolTip>
      </StackPanel>
    </DataTemplate>
  </Window.Resources>

  <TreeView>
    <TreeViewItem Header="Categories" x:Name="_ImageTree"
                  x:FieldModifier="private">
      <TreeViewItem TextBlock.FontStyle="Italic"
                    Header="Loading..."/>
    </TreeViewItem>
  </TreeView>
</Window>




using System;
using System.Collections.Generic;
using System.IO;
using System.Net;
using System.Runtime.Serialization.Json;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;

namespace WpfTreeView
{
public partial class TreeViewWindow : Window
{
public const string BaseUrl = "http://www.gamingtextures.com";
public const string QueryURl = BaseUrl + "/Callbacks/query.php";

public TreeViewWindow()
{
InitializeComponent();
AddHandler(TreeViewItem.ExpandedEvent,
new RoutedEventHandler(TreeItemExpanded), true);

var wc = new WebClient();
wc.OpenReadCompleted += BaseCategoryReadCompleted;
wc.OpenReadAsync(new Uri(QueryURl + "?QType=AllBaseCats"));
}

private void BaseCategoryReadCompleted(object sender,
OpenReadCompletedEventArgs e)
{
if (e.Error != null || e.Cancelled)
{
((TreeViewItem)_ImageTree.Items[0]).Header =
"Error Getting Base Categories";
return;
}

_ImageTree.Items.Clear();
_ImageTree.ItemsSource = Category.DeserializeJson(e.Result);
}

private void TreeItemExpanded(object sender, RoutedEventArgs e)
{
var item = e.OriginalSource as TreeViewItem;
if (item == null)
{ return; }
var cat = item.DataContext as Category;
if (cat == null)
{ return; }
cat.LoadChildren();
}
}

public class Category
{
private bool _Loaded = false;
private int _ActualChildrenCount = 0;

public int IDCategory { get; set; }
public string CatName { get; set; }
public string CatDescription { get; set; }
public CompositeCollection Children { get; set; }

public Category()
{
Children = new CompositeCollection();
Children.Add(new TextBlock() {
Text = "Loading...", FontStyle = FontStyles.Italic });
}

public static List<Category> DeserializeJson(Stream stream)
{
var json = new DataContractJsonSerializer(typeof(List<Category>));
return json.ReadObject(stream) as List<Category>;
}

public void LoadChildren()
{
if (_Loaded)
{ return; }

_Loaded = true;
var wc = new WebClient();
wc.OpenReadCompleted += CategoryReadCompleted;
wc.OpenReadAsync(new Uri(TreeViewWindow.QueryURl
+ "?QType=NextCatChildren&IDCat=" + IDCategory));
}

private void CategoryReadCompleted(object sender,
OpenReadCompletedEventArgs e)
{
if (e.Error != null || e.Cancelled)
{
((TextBlock)Children[0]).Text = "Error Getting Category Children";
return;
}

var list = DeserializeJson(e.Result);
_ActualChildrenCount += list.Count;
Children.Insert(0, new CollectionContainer() { Collection = list });

var wc = new WebClient();
wc.OpenReadCompleted += ImageReadCompleted;
wc.OpenReadAsync(new Uri(TreeViewWindow.QueryURl
+ "?QType=NextImgChildren&IDCat=" + IDCategory));
}

private void ImageReadCompleted(object sender,
OpenReadCompletedEventArgs e)
{
if (e.Error != null || e.Cancelled)
{
((TextBlock)Children[1]).Text = "Error Getting Category Children";
return;
}

Children.RemoveAt(1);
var list = GTImage.DeserializeJson(e.Result);
_ActualChildrenCount += list.Count;
Children.Add(new CollectionContainer() { Collection = list });

if (_ActualChildrenCount == 0)
{ Children.Add(new TextBlock() { Text = "No Children" }); }
}
}

public class GTImage
{
public int IDImage { get; set; }
public string Name { get; set; }
public string Description { get; set; }

public string IconPath
{
get
{
return TreeViewWindow.BaseUrl
+ "/Images/image.php?IDImage=" + IDImage;
}
}

public string ThumbnailPath
{
get
{
return TreeViewWindow.BaseUrl
+ "/Images/image.php?IDTFS=3&IDImage=" + IDImage;
}
}

public static List<GTImage> DeserializeJson(Stream stream)
{
var json = new DataContractJsonSerializer(typeof(List<GTImage>));
return json.ReadObject(stream) as List<GTImage>;
}
}
}
 
Hope this tutorial was an informative introduction to the TreeView and HierarchicalDataTemplates. As always, you can grab the Visual Studio solution below if you want to play around with the code. 
Source Files:

C# WPF Tutorial – Writing a Single Instance Application [Intermediate]

Today we are going to be taking a look at how to build a single instance application in WPF. Not a single instance in this sense, but in the sense that you can only run one instance of the application at a time. Generally, you can run as many instances of an app as you want at once (at least until you run out of resources). Take Notepad, for instance. You can run Notepad a dozen times, and you will get a dozen separate Notepad windows, and a dozen separate lines in “Task Manager” that read “notepad.exe”. Killing one of those lines just kills one of those Notepad windows, and the rest live on happily.
On the other hand, you have an application like Firefox. At any given time, there should only be one line in Task Manager that reads “firefox.exe”. This is because every time you hit the Firefox shortcut, or double click the executable, instead of running a new instance of the app, the running instance gets a message (which is how Firefox knows to open a new browser window).
So why would you as a developer write an app that behaved in this way? The most common reason has to do with resources – your application needs an exclusive lock on some resource. That resource could be anything from a hardware device to a file on disk. But if your app needs an exclusive lock, you better not let other instances run, because those other instances will fail.
Ok, so how do we do this in .NET (and, more specifically, WPF)? It actually isn’t that bad (.NET fortunately has a useful built-in class that we get to use), but it does take some drastic changes to the default structure of a WPF application.
To get started, go and create a new WPF Visual Studio project. By default, it comes up with two main items in the solution – “Window1.xaml” (which I renamed to “ExampleWindow.xaml”) and “App.xaml”. Both of these also have their respective code behind files. So first off, do something you have probably never done before – delete “App.xaml” and “App.xaml.cs”. We won’t be needing them, because we will be doing our own Application creation.
Now create a new class (I called it “ExampleApplication”). This will be our application. The two main pieces of logic that this class needs to have are for showing the main window and for processing command line arguments. The first piece will only happen once – at the initial application start up. The second, however, will happen every time a user tries to run the app (and we will see how that works in a moment). Take a look at the code:
using System.Windows;

namespace SingleInstanceExample
{
public class ExampleApplication : Application
{
public ExampleWindow MyWindow { get; private set; }

public ExampleApplication()
: base()
{ }

protected override void OnStartup(StartupEventArgs e)
{
base.OnStartup(e);

MyWindow = new ExampleWindow();
ProcessArgs(e.Args, true);

MyWindow.Show();
}

public void ProcessArgs(string[] args, bool firstInstance)
{
//Process Command Line Arguments Here
}
}
}
 
OnStartup will only get called once – at the very initial application startup. So it is here that we make a new window and show it. We also process the command line arguments, and add a flag saying that these are the arguments to the first instance. We will be calling ProcessArgs from somewhere else when the user tries to start other instances of the app.
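The tutorial leaves the body of ProcessArgs empty. As a rough sketch of what might go in there – the helper name and the convention that non-flag arguments are file paths are my own assumptions, not part of the original – a second launch could forward document paths to the running window:

```csharp
using System.Collections.Generic;

public static class ArgHelpers
{
    // Hypothetical helper: treat every argument that doesn't start
    // with "-" as a file path the running instance should open.
    public static List<string> ExtractFilePaths(string[] args)
    {
        var files = new List<string>();
        foreach (var arg in args)
        {
            if (!arg.StartsWith("-"))
            { files.Add(arg); }
        }
        return files;
    }
}
```

ProcessArgs could then hand the returned list to MyWindow – either to initialize it on the first run, or to open extra documents when a subsequent launch forwards its arguments.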
Ok, so that code would get a window off the ground – but what calls that code? Well, for that, we need another class (and this class will actually hold the entry point for the application). The important thing for this class is that it has to derive from WindowsFormsApplicationBase. To get that class, you actually need to add a special reference to your Visual Studio project – “Microsoft.VisualBasic.dll”. Don’t ask me why this class is stuck in that dll – that is just where it happens to be.
using System;
using System.Linq;
using Microsoft.VisualBasic.ApplicationServices;

namespace SingleInstanceExample
{
public sealed class SingleInstanceManager : WindowsFormsApplicationBase
{
[STAThread]
public static void Main(string[] args)
{ (new SingleInstanceManager()).Run(args); }

public SingleInstanceManager()
{ IsSingleInstance = true; }

public ExampleApplication App { get; private set; }

protected override bool OnStartup(StartupEventArgs e)
{
App = new ExampleApplication();
App.Run();
return false;
}

protected override void OnStartupNextInstance(
StartupNextInstanceEventArgs eventArgs)
{
base.OnStartupNextInstance(eventArgs);
App.MyWindow.Activate();
App.ProcessArgs(eventArgs.CommandLine.ToArray(), false);
}
}
}
 
It is in this class that all the magic happens. Every time the application is run, it enters the Main method, creates a new instance of this class, and calls Run. If it is the first instance, this will cause OnStartup to get called, and everything goes from there. If it is not the first, OnStartupNextInstance gets called on the already running instance, and the instance that was just started shuts down.
It really is as simple as that. The command line arguments for subsequent instances are even right there in the handy StartupNextInstanceEventArgs.
By default, this will only work for multiple instances of the exact same build of an application. If you need this to work across multiple builds, there is one more step you have to take. In the AssemblyInfo.cs file of your project (generally under the “Properties” folder) you have to add a GUID for your assembly. This GUID is what will be checked against when Windows sees if it is allowed to start another instance of the app. When there is no GUID explicitly set, Visual Studio generates a new one every time you build (which is why, without this change, the single instance manager will only work for other instances of the exact same build). You will want your AssemblyInfo.cs to look something like this (of course, you want to use your own GUID):
using System.Reflection;
using System.Runtime.InteropServices;

[assembly: AssemblyTitle("SingleInstanceExample")]
[assembly: AssemblyProduct("SingleInstanceExample")]
[assembly: GuidAttribute("1A6236B4-8CD1-4c76-86FD-F5352330D190")]
 
That is it for writing a single instance application in .NET and WPF. The code for the example (and the associated Visual Studio solution) can be found in the zip file below. 
Source Files:

C# – Getting from WPF to a Metafile & onto the Clipboard [Intermediate]

So in the world of Windows, there is this horribly awful and horribly useful thing called a metafile (sometimes called WMF, sometimes EMF). In general terms, it is essentially a transportable list of GDI commands for drawing an image or set of images. Applications like Microsoft Word use it for transferring collections of graphical objects to other applications through the clipboard or drag & drop. It has been around for quite a while, and so is supported as a standard clipboard/drag&drop format by many applications.
However, as soon as we enter the WPF world, we have a problem. WPF knows nothing about GDI – you can’t convert from a WPF Visual into a list of GDI commands. So the very basic infrastructure of a metafile no longer meshes with the way WPF works. But don’t give up yet!
While it isn’t possible to go from a WPF visual to GDI commands, it is possible to go from a visual to a bitmap. And, thankfully, a bitmap can be placed inside of a metafile. Now, be warned, this is not a perfect solution – part of the usefulness of a metafile is that since it is just a list of GDI commands, it is (in many ways) a vectored image. By just sticking a bitmap in the metafile, that whole vectored concept goes out the window.
You are probably wondering “if you already have a bitmap in your hands, why not just stick that on the clipboard, instead of going to all the effort of creating a metafile?” (if you weren’t, you should have been :P). The answer to that is twofold – one, sometimes applications deal with metafiles much better than a plain old bitmap (Microsoft Word in drag/drop, I’m looking at you!). Two, when a bitmap gets placed on the clipboard, any information about the DPI of that bitmap gets lost (because what gets placed on the clipboard is just the pixels – no header information is carried along). By using a metafile, information like DPI is kept with the image.
Ok, enough talk and complaining. Let’s look at some code. It actually isn’t that hard to do – it is just a matter of knowing what to do in the first place:
public static bool PlaceElementOnClipboard(UIElement element)
{
bool success = true;
Bitmap gdiBitmap = null;
MemoryStream metafileStream = null;

var wpfBitmap = MakeRenderTargetBitmap(element);
try
{
gdiBitmap = MakeSystemDrawingBitmap(wpfBitmap);
metafileStream = MakeMetafileStream(gdiBitmap);

var dataObj = new DataObject();
dataObj.SetData(DataFormats.Bitmap, gdiBitmap);
dataObj.SetData(DataFormats.EnhancedMetafile, metafileStream);
Clipboard.SetDataObject(dataObj, true);
}
catch
{ success = false; }
finally
{
if (gdiBitmap != null)
{ gdiBitmap.Dispose(); }
if (metafileStream != null)
{ metafileStream.Dispose(); }
}

return success;
}
 
That’s the top level view of what we are doing. First, we create a RenderTargetBitmap out of the given UIElement. Then we convert that bitmap (the WPF kind) into the GDI kind (a System.Drawing.Bitmap). Once we have the GDI bitmap, we can create a metafile (or, in this case, create a MemoryStream containing the metafile).
Once we have everything created, it is time to create a DataObject and populate it. We push in both the GDI bitmap and the metafile stream to make sure that if an application supports one format, but not the other, the data on the clipboard will still be useful. After we have a populated data object, all that is left is to push it onto the clipboard, and then clean up after ourselves.
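Hooking the helper up from the UI is then a one-liner. A minimal sketch – the click handler and the _CopyTarget element name are assumptions for illustration, not part of the original code:

```csharp
// Hypothetical click handler: copy some element from our window
// (here a UIElement named _CopyTarget) onto the clipboard.
private void CopyButton_Click(object sender, RoutedEventArgs e)
{
    if (!PlaceElementOnClipboard(_CopyTarget))
    { MessageBox.Show("Placing the element on the clipboard failed."); }
}
```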
Creating the RenderTargetBitmap and the System.Drawing.Bitmap isn’t that interesting – we have done it before (you can check out this tutorial), but here is the code anyway:
private static RenderTargetBitmap MakeRenderTargetBitmap(UIElement element)
{
element.Measure(new System.Windows.Size(double.PositiveInfinity,
double.PositiveInfinity));
element.Arrange(new Rect(new System.Windows.Point(0, 0),
element.DesiredSize));
RenderTargetBitmap rtb = new RenderTargetBitmap(
(int)Math.Ceiling(element.RenderSize.Width),
(int)Math.Ceiling(element.RenderSize.Height),
96, 96, PixelFormats.Pbgra32);
rtb.Render(element);
return rtb;
}

private static Bitmap MakeSystemDrawingBitmap(RenderTargetBitmap wpfBitmap)
{
var encoder = new BmpBitmapEncoder();
encoder.Frames.Add(BitmapFrame.Create(wpfBitmap));
var stream = new MemoryStream();
encoder.Save(stream);

var gdiBitmap = new Bitmap(stream);
stream.Close();
stream.Dispose();

return gdiBitmap;
}
 
The interesting code is converting from the System.Drawing.Bitmap to the Metafile:
private static MemoryStream MakeMetafileStream(Bitmap image)
{
Graphics graphics = null;
Metafile metafile = null;
var stream = new MemoryStream();
try
{
using (graphics = Graphics.FromImage(image))
{
var hdc = graphics.GetHdc();
metafile = new Metafile(stream, hdc);
graphics.ReleaseHdc(hdc);
}
using (graphics = Graphics.FromImage(metafile))
{ graphics.DrawImage(image, 0, 0); }
}
finally
{
if (graphics != null)
{ graphics.Dispose(); }
if (metafile != null)
{ metafile.Dispose(); }
}
return stream;
}
 
The gist here is that we pull the HDC (the Handle for Device Context) out of the Graphics object for the System.Drawing.Bitmap and use it to make a new metafile on top of a new memory stream we just created. Then we get the graphics object for the new metafile, and draw on it as much as we want (although in this case, all we want to do is draw the bitmap). Once we are done, we clean up, and are left with a MemoryStream that holds the metafile.
There you go! We successfully took a WPF element and got it onto the clipboard in a (rasterized) metafile. One random other thing to note before you grab the code file below and go on your way – don’t forget to add a reference to the System.Drawing dll in your Visual Studio project – that is where the Metafile, Graphics, and Bitmap classes are defined.

C# WPF Snippet – Reliably Getting The Mouse Position [Intermediate]

If you’ve worked for a while in WPF doing any kind of complicated user interface, you may have noticed that while WPF has some great methods for getting the mouse position relative to an element, there is one serious problem – the returned position is sometimes wrong. Yeah, I know it is hard to believe, but it is true. When I first ran across the problem, I tore out my hair for the better part of a day trying to find what was wrong with my code – only to eventually figure out that this is a known issue with WPF.
This problem is actually even documented on the MSDN page about the standard WPF function to get mouse position. The following quote is taken verbatim from the MSDN page on Mouse.GetPosition:
During drag-and-drop operations, the position of the mouse cannot be reliably 
determined through GetPosition. This is because control of the mouse (possibly 
including capture) is held by the originating element of the drag until the drop 
is completed, with much of the behavior controlled by underlying Win32 calls.
The problem is more widespread than just drag-and-drop, though. It actually has to do with mouse capture (as the quote states) – and so anytime that an element is doing something funky with mouse capture, there is no guarantee that the position returned by Mouse.GetPosition will be correct.
The issue also applies to the GetPosition function on MouseEventArgs, which is available through all the standard mouse events. Even one of their suggested workarounds for the issue during drag-and-drop (using the GetPosition function on DragEventArgs) has the exact same problem. They really need to remove that workaround from their list – figuring out that it didn’t work either was another few hours of hair-tearing pain.
Ok, but enough complaining about what doesn’t work – time to figure out what does. The second workaround suggested on MSDN actually does work, which is P/Invoking the native method GetCursorPos. This is actually pretty easy to do, assuming that you know how to pull in native methods. So let’s take a look at the code:
using System;
using System.Runtime.InteropServices;
using System.Windows;
using System.Windows.Media;

namespace CorrectCursorPos
{
public static class MouseUtilities
{
public static Point CorrectGetPosition(Visual relativeTo)
{
Win32Point w32Mouse = new Win32Point();
GetCursorPos(ref w32Mouse);
return relativeTo.PointFromScreen(new Point(w32Mouse.X, w32Mouse.Y));
}

[StructLayout(LayoutKind.Sequential)]
internal struct Win32Point
{
public Int32 X;
public Int32 Y;
};

[DllImport("user32.dll")]
[return: MarshalAs(UnmanagedType.Bool)]
internal static extern bool GetCursorPos(ref Win32Point pt);
}
}
 
So what we want here is a method that works the same way as the WPF version, except that it is correct all the time. This means a method (in this case CorrectGetPosition) that takes in a Visual as the argument, and returns the mouse position relative to that Visual. What does this method do? Well, first, we have to create our own special point struct, which I named Win32Point here. This is because outside of WPF, mouse positioning is dealt with in terms of pixels, and so the coordinates are integers (not doubles, like the WPF Point struct). Then we pass the struct by reference to the native method GetCursorPos, which fills the struct with the current cursor coordinates. The code used to pull in this native method shouldn’t look too surprising, even though it isn’t something you write every day.
Once we have the raw cursor position, we need to convert it to something that makes sense in the WPF world. This is where the handy PointFromScreen method is useful. This method converts a screen position into WPF coordinates relative to the visual. And that is it! The value returned by PointFromScreen is the correct cursor position.
One important thing to note about using the results of GetCursorPos in WPF. You should never use those values directly, because the values are in system pixel coordinates, which are meaningless to WPF (since WPF uses DIU, or Device Independent Units, instead of pixels). Using them directly will cause a subtle problem that you won’t notice until you run your application on a system that has a DPI setting other than 96 (this is because 1 pixel = 1 DIU when working on a 96 DPI screen). Before using the result, you should always pass it through something like PointFromScreen (WPF does the translation between screen pixels and DIUs deep inside that method).
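If you ever do need to convert raw pixel coordinates without making them relative to a particular element, WPF exposes the same pixel-to-DIU transform directly through the PresentationSource. A sketch of that alternative – PointFromScreen is still the simpler route when you have a suitable Visual handy:

```csharp
// Convert a raw screen-pixel point into device independent units (DIUs),
// without making it relative to any particular element. The transform
// is the identity matrix on a standard 96 DPI screen.
public static Point ScreenPixelsToDiu(Visual visual, Point screenPixels)
{
    var source = PresentationSource.FromVisual(visual);
    if (source == null)
    { return screenPixels; } // the visual isn't attached to a window yet

    return source.CompositionTarget.TransformFromDevice.Transform(screenPixels);
}
```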
Now really, was that so hard? As you might have been able to tell from my tone at the start of the tutorial, this WPF issue really irked me. But oh well, hopefully they fix it in the next version of WPF.