Thursday, January 17, 2008

WebExceptions and you

Every now and then, one of my applications would throw an error of type 'System.Net.WebException: The request was aborted: The request was canceled'.

This happened during the execution of a SOAP webservice call towards a backend system. The problem was that it was almost impossible to reproduce, happening as it did at irregular, non-deterministic moments. I only noticed it because of an automatic error reporter, which mails any exceptions to a public Outlook folder.

After finding some time to investigate, I found that it was apparently a known issue, related to the Keep-Alive property of the HTTP request itself: every now and then, IIS ate the webservice request when Keep-Alive was true.

One possible solution was of course to disable Keep-Alive at the IIS level. While this would no doubt solve the problem, it is kind of overkill and can have negative implications for the performance of the application as a whole (Keep-Alive is there for a reason, you know).

A better solution is to disable Keep-Alive for the SOAP request only, and leave it on for the other requests. This was slightly complicated by the fact that - like most .NET users - we don't actually code our webservice consumption classes ourselves, but use generated proxy classes, obtained by either wsdl.exe or by adding a Web Reference in VS.NET. Luckily, by means of a simple subclass, we could dance around this problem and keep our code generation intact.


public class MyCustomService : MyGeneratedService
{
    protected override WebRequest GetWebRequest(Uri uri)
    {
        HttpWebRequest request = (HttpWebRequest)base.GetWebRequest(uri);
        request.KeepAlive = false;

        return request;
    }
}


After introducing these changes for each SOAP request, the intermittent WebException has disappeared. Life is good.
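For what it's worth, swapping in the subclass is then just a matter of instantiating it instead of the generated proxy. A minimal sketch - the GetData method and the URL below are stand-ins for whatever your WSDL actually produces:

```csharp
// Hypothetical usage - 'GetData' stands in for a method
// on your actual generated proxy class.
MyCustomService service = new MyCustomService();
service.Url = "http://backend/service.asmx";

// Every call now goes out with Connection: Close
// instead of Connection: Keep-Alive.
string result = service.GetData();
```

Nothing else in the calling code needs to change, and you can safely regenerate the proxy whenever the WSDL changes.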

Wednesday, September 12, 2007

Making perfect software in a non-perfect world

Ideally, every software project would be lavishly described and would live in its own little world, having a dedicated database and no dependencies that slow you down. In such conditions, you can just sit yourself down and work uninterrupted, quickly reaching Programming Nirvana.

Of course, no real project is ever like that, so the trick becomes: how do we work in an environment where your data is spread out over several applications, each with its own timetable and its own priorities? One of the possible techniques is called Stubbing.

Stubbing or Mocking is a technique that is much better described by the likes of Martin Fowler and other high-profile architects, but in short, it's a way of first hiding away the implementation of a complex subsystem by only exposing its interface to the applications that will be using it, and then coding against a simplified (or dummy) implementation of that interface while the 'real' application is being made.
This allows you to continue with the rest of your app, and come back to the subsystem
when you have successfully driven your users into a corner and forced them to do their work.

The concept of Stubbing seems to scare a lot of developers, because on the surface it seems to be something for huge projects and - let's face it - is often explained in needlessly complex terms. This is unfortunate, because even in smaller projects stubbing can be a very useful technique, and it doesn't have to be extremely complicated.

An example to illustrate.

You're tasked to implement a medium-sized SOA project, but you need to get some data from your local AS400 system. Unfortunately, the AS400 guy has just fallen off his roof and will be out for the next two weeks. So instead of twiddling your thumbs for two weeks, Stubbing will allow you to at least implement the other tasks and leave the implementation of the AS400 subsystem for later.

So what we need is essentially a system that is going to pretend to be an AS400, and that allows us to "plug" it in dynamically (meaning that when the actual AS400 application becomes available, we don't have to change much code - preferably none!).

To implement this, we'll start by creating an Interface. Learn to love those, they are your best friend.

public interface IAs400Connector
{
    PORT GetLines(string user, string from, string to);
}


Next step, create our 'stub' class and implement this interface.

public class StubAs400Connector : IAs400Connector
{
    public PORT GetLines(string user, string from, string to)
    {
        string fileName = "Sample.xml";
        string xml;
        using (StreamReader streamReader = new StreamReader(fileName))
        {
            xml = streamReader.ReadToEnd();
        }

        return XmlPortSerializer.DeSerialize(xml);
    }
}


When creating a stub class, it's always a good idea to structure your stub data as closely as you can to the original data. That way, you limit the work you have to do when you plug in the real class later on. In this particular instance, we assume that the AS400 developers can give us a DTD or a Schema beforehand, so we can easily generate some sample data which should be very similar to the real thing (in theory). The implementation then simply consists of reading the XML sample file and deserializing it into a custom object (which has been previously generated by a tool such as XSD.exe).

The 'hard' part is mostly done now; all that remains is to implement a little Factory, so that the calling application doesn't have to worry about instantiating the right Connector. In fact, we'll go one step further and push that decision right into the configuration file.

public static class As400ConnectorFactory
{
    private static IAs400Connector connector;

    public static IAs400Connector GetAs400Connector()
    {
        if (connector == null)
            connector = (IAs400Connector)Activator.CreateInstance(Type.GetType(Settings.Default.As400Connector));

        return connector;
    }
}


There are a few things going on here. First of all, we make use of the Properties feature of .NET 2.0, in which you can include dynamic, strongly-typed properties in your project and have those populated by the calling application by means of a configuration file, as seen below. These settings can then be accessed in your application through an (autogenerated) strongly-typed Settings class.



<configSections>
  <sectionGroup name="applicationSettings" type="System.Configuration.ApplicationSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
    <section name="Project.As400Connector.Properties.Settings" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
  </sectionGroup>
</configSections>

<applicationSettings>
  <Project.As400Connector.Properties.Settings>
    <setting name="As400Connector" serializeAs="String">
      <value>Project.As400Connector.StubAs400Connector, Project.As400Connector</value>
    </setting>
  </Project.As400Connector.Properties.Settings>
</applicationSettings>




Second, we then use Reflection to instantiate the Connector specified in the configuration file.

connector = (IAs400Connector)Activator.CreateInstance(Type.GetType(Settings.Default.As400Connector));


If you're worried about using Reflection to create your class on every request, just make the connector static like I did; that way it'll get cached. Keep in mind that you'll then need to restart the application if you want to force a change in connector. Of course, if you use this in a web context, changing the Web.config will automatically restart your application anyway, so it's a moot point :)

Now, you can merrily start designing your screens and grids and lists and whatnot, without stressing about the lack of data. Once your AS400 boys finish their work, you code the 'real' Connector class, switch to it in your configuration file and test with the real data. If everyone has done their homework, the work left should be minimal (mostly validating the length of the real data and ironing out some inevitable quirks).
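The 'real' Connector could then look something along these lines - a sketch only, since the actual AS400 access code (the As400Client class below is a made-up placeholder) depends entirely on which connectivity library you end up using:

```csharp
public class RealAs400Connector : IAs400Connector
{
    public PORT GetLines(string user, string from, string to)
    {
        // 'As400Client' is a hypothetical stand-in for whatever
        // data-access component talks to the actual AS400.
        using (As400Client client = new As400Client())
        {
            string xml = client.QueryLines(user, from, to);

            // Same deserialization path as the stub, so the
            // calling code sees no difference whatsoever.
            return XmlPortSerializer.DeSerialize(xml);
        }
    }
}
```

Because both classes implement IAs400Connector, switching from the stub to the real thing is literally a one-line change in the configuration file.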

While this is obviously a very simple example, in larger projects nothing is stopping you from implementing a much more complex stubbing system, if needed.

Thursday, January 11, 2007

Not for publication

After performing a "Publish website" operation on my ASP.NET 2.0 web project, I got the following error upon browsing to the newly-deployed site:

CS0433: The type 'ProfileCommon' exists in both
'c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET
Files\root\dc8cbf9e\f05d928b\assembly\dl3\2f0caf74\881e5468_eec7c601\App_Code.DLL'
and 'c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET
Files\root\dc8cbf9e\f05d928b\App_Code.r_z1bmqm.dll '


which is of course quite nonsensical as the two directories listed above are in fact the same.

After some investigation, I found that the root of the problem was that I had marked the website as "Precompiled" during the publish operation AND I had subsequently deleted the "PrecompiledApp.config" file upon completion...

Apparently, that little config file tells the .NET runtime that the site is already compiled, and as such shouldn't be compiled again. Without that file, all kinds of conflicts arise between the precompiled files and the newly-compiled files.
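For reference, the file in question is tiny - a published site gets something like the following at its root (the updatable flag depends on which precompilation option you picked):

```xml
<precompiledApp version="2" updatable="true"/>
```

Small as it is, it's clearly not decorative.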

So, in summary.

Publishing websites precompiled GOOD ^-^
Deleting PrecompiledApp.config afterwards BAD >.<

Tuesday, November 28, 2006

The Case of the Overzealous Profile

ASP.NET 2.0 ships with a wonderful feature called "Profile", which is essentially a server-side replacement for cookies, most commonly used for persisting user settings. The advantage of the server-side solution being, of course, that you're no longer dependent on users actually supporting cookies and/or deciding to delete them in a sudden fit of paranoia. ***

Another nice thing about the Profile is that it follows the Provider model and ships with a pre-working SQL Server implementation.

In a recent ASP.NET application, we made extensive use of the Profile in our search screens; however, since we were working on an Oracle database, we had to create our own custom Provider. This is absolutely no problem, since .NET makes it really easy to do so, and to plug your resulting custom provider into your application without any hassle.

However, we did notice a performance degradation when using the Profile, and we occasionally even got Oracle errors when the application was under heavy load. When we finally found some time to investigate, we discovered that the SetPropertyValue method of the Provider was being called on each and every visit to the page, and this for every available property in our Profile, resulting in at least 15-20 extra database calls per page access. Needless to say, this did not help performance one bit.

After digging around some more, it turned out that the Profile mechanism has in fact an auto-load and auto-save mechanism built-in. In other words:
  • at the start of the page cycle, all properties get read from the data store
  • at the end of the page cycle, all properties get persisted to the data store

Fortunately, there is also an option to turn this automatic save off, via a simple modification of the web.config.



<profile automaticSaveEnabled="false" />

Not quite as well-documented as I'd hoped though...
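One caveat worth spelling out: with automaticSaveEnabled off, changes to Profile properties are no longer persisted for you, so you have to call Save yourself at the points where it actually matters. Something like this, where LastSearchFilter is just an example property name:

```csharp
// With automaticSaveEnabled="false", persist explicitly -
// 'LastSearchFilter' is a hypothetical Profile property.
Profile.LastSearchFilter = filterBox.Text;
Profile.Save(); // one database round-trip, and only when we changed something
```

That single explicit call replaces the 15-20 automatic ones per page, which is exactly the trade-off we were after.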


*** Note that the Profile can, however, be enabled for anonymous usage; in that case you will need cookies again for this to work. :)

Monday, November 27, 2006

How to take Control... of your controls

When creating custom ASP.NET controls, there's the question of which base class to use for your control.
Generally speaking, you have three big categories:

  • Rendered Controls
  • Container Controls
  • Inherited Controls

Of those choices, inherited controls are the simplest, in that they simply inherit from an existing control (custom or otherwise). An example would be inheriting from a DropDownList and adding some custom code to pre-populate it. Or a ConfirmButton control, which inherits from Button and automatically renders an "are you sure?" javascript confirmation when you click it. Inherited controls are to be used when you want all the functionality of a certain control, plus just "something" extra.
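That ConfirmButton makes for a nice minimal illustration of an inherited control. A sketch of the idea, not production code - the ConfirmMessage property is purely my own invention:

```csharp
public class ConfirmButton : Button
{
    // Hypothetical extra property; everything else is inherited from Button.
    public string ConfirmMessage = "Are you sure?";

    protected override void OnPreRender(EventArgs e)
    {
        // Have the browser ask for confirmation; returning false
        // from confirm() cancels the postback.
        Attributes["onclick"] = string.Format(
            "return confirm('{0}');", ConfirmMessage);
        base.OnPreRender(e);
    }
}
```

A dozen lines, and every Button feature (postbacks, validation, theming) comes along for free.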

Rendered Controls are created by inheriting from the WebControl class. They operate by directly injecting HTML into the output stream, and as such they bypass the normal event lifecycle for any of their contained subcontrols. The result is that a Rendered Control will render much faster than a Container Control, but does not support event postbacks out-of-the-box. While it is possible to provide these events by implementing IPostBackEventHandler and IPostBackDataHandler, and then writing custom code to handle them, this does raise complexity a bit and tends to create quite cluttered code. It is advisable, then, to only use Rendered Controls for simple, mostly static controls.

Container Controls, as their name implies, function as containers or collections for other controls. They are implemented by inheriting from CompositeControl. Your subcontrols are dynamically added to the Controls collection of your CompositeControl, and they will then go through their normal event lifecycle, allowing you to customise them as needed. While this functionality comes at a price (somewhat worse performance compared to Rendered Controls), it allows you to build your webcontrols in a true object-oriented fashion, and apply all your usual Design Patterns to them. It also makes it much easier to inherit from your own controls, and in doing so, create a reusable and extensible library of server controls.

The choice between these three types of controls is - as is usually the case - dependent on the requirements of your project, but generally speaking, Container Controls will be your best friend the moment your controls become a bit more complex and require event interaction.

Tuesday, February 14, 2006

Sometimes it really is _that_ easy

Sharepoint has a Page Viewer webpart, allowing you to integrate external applications into your Portal. Very useful, but sometimes this presents problems of its own.

Case in point: one of our clients used this webpart to integrate his .NET web application into a WSS site, and had some trouble with his Session variables not showing up when viewed through the webpart. Logical, if you think about it, because the site is being referenced through an IFrame of another site, so your cookies are not going to be accessible. It's not a bug, it's a feature! In this case, a security feature, because how would you like it if any external site could just frame you (pun intended) and nose around in your cookies?

Faced with the prospect of having to rewrite his entire application without Session variables, the client's developer started considering jumping off the nearest bridge. That is, until I pointed out to him that he could simply switch the session state handling in his .NET application by modifying the web.config.


<sessionState mode="InProc"
              stateConnectionString="tcpip=127.0.0.1:42424"
              sqlConnectionString="data source=127.0.0.1;user id=sa;password="
              cookieless="true"
              timeout="20"
/>


By setting the "cookieless" attribute to true, ASP.NET will automatically (and with no impact on the application!) switch from using cookies to track session ids to appending a session identifier to the URL. No fuss, no need to rewrite your application!

Problem solved, client happy. If only everything was so easy.

Tuesday, January 17, 2006

Alive

Yes, I'm still alive.

I'll post a proper update soon, but in the meantime, this might be a good read.

Tale of conversion from an ASP.NET 1.1 application to 2.0

Monday, November 14, 2005

Old Dog, New Trick

Want to implement "Created By" fields for your custom tables in your SharePoint environment?

Fairly easy to do, once you know how to ;)

The trick is to use a not-so-well-known SQL Server function called SYSTEM_USER; this will automatically be filled with the SQL Server authenticated user, whether that's a SQL user or a Windows user.

So, first make a CreatedBy field on your table.
Then create an insert trigger, and use the SYSTEM_USER function to fill the field.

CREATE TRIGGER MyTable_FillCreatedBy
ON dbo.MyTable
FOR INSERT
AS
UPDATE MyTable SET CreatedBy = SYSTEM_USER WHERE ID IN (SELECT ID FROM INSERTED)

(Note the IN rather than a simple =, so that multi-row inserts are handled correctly too.)


Then of course, you need to correctly configure your authentication towards your custom database.
In the case of SQL Authentication, just set the correct credentials in your application configuration file (such as web.config).
In the case of Windows Authentication, it's somewhat more complicated.
First, you need to change your connection string to Integrated Windows Authentication.

Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=MyDb;Data Source=127.0.0.1

Make sure the 'identity impersonate' setting is also set to true!
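In web.config terms, that's the following line (inside the system.web section):

```xml
<identity impersonate="true" />
```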

Then you need to add the necessary user accounts as logins to your SQL Server. It's generally speaking a good idea to work with a *group*: give that group access to your database, and add all users that need access to the group.

The nice thing is that SYSTEM_USER is still going to fill in the individual logged-in user even though only the group has access to the SQL Server.

One big gotcha with Windows Authentication is that it won't work when your SQL Server is on a different server than your SharePoint (or, more generally, IIS) server, at least not when your network is configured with NTLM authentication as opposed to Kerberos. There are a few basic fixes for this:
1) Use SQL Authentication ;)
2) Set up SharePoint to use Kerberos authentication (not so trivial)
3) Set up a 'trust' relationship between the two servers (fiddly)

Friday, October 14, 2005

Need a plumber?

Sometimes, with all the glittering IDEs and autocomplete features in our development tools, it's easy to forget that those, too, are just (or have been) someone else's software project, and are as such subject to the same deadlines, feature creep and illogical decisions that our own projects sometimes experience.

Therefore, no matter how shiny a tool may be, always remember that it may have its own logic and idiosyncrasies underneath, because you don't know (and you probably don't *want* to know) what happened in its development lifecycle. Also, most products are really a hodge-podge of various other technologies, some of which were written by third parties, and I'm sure had questionable documentation ;)

Self-proclaimed software guru Joel Spolsky called it the "Law of Leaky Abstractions" in one of his essays, and that's a pretty good analogy.

We recently met one of those leaky abstractions in the follow-up of a BizTalk project, in which a client had some encoding troubles consuming a .NET webservice from his PHP webserver. In the course of investigating the problem, we traced it all the way from a File Pickup port through an Orchestration schedule, to the Send Port, and finally up to the (generated) webservice itself. Having determined that none of the previous steps helped in resolving the problem, we then went about changing the encoding in the webservice itself (even though it's a less-than-optimal solution, because BizTalk regenerates that webservice when you deploy).

So we edited the Web.Config of the webservice, and changed the following line:
<globalization requestEncoding="utf-8" responseEncoding="utf-8"/>
to
<globalization requestEncoding="utf-16" responseEncoding="utf-16"/>

Save web.config, refresh the webservice client, look at the results.
And absolutely nothing changed...

When even an IISReset didn't help the problem (programmer's superstition...), we did some more research and suddenly found this:
http://weblogs.asp.net/tmarman/archive/2004/02/02/66476.aspx

So change that globalization tag all you want, change the encoding by other means, stick needles in a voodoo doll; it's just not going to work, because apparently ASMX webservices are hardcoded to UTF-8...

Make sure you also read the first comment on that blog entry, as it's apparently from a Microsoft employee who was involved in the process. ;)

So remember this the next time you wonder why a piece of software doesn't do what you want. There's probably a leak somewhere ;)

Friday, October 07, 2005

Design vs Deadline

One of the most controversial topics among programmers is the discussion of "design vs getting things done". It takes place on many different levels, from the low-level data-access strategy (DataSets vs typed objects) to the higher-level architectural decisions on reusability, extensibility, etc.

The good thing about Microsoft .NET is (IMO) that for the first time - using Microsoft technologies - you can now sort of combine both. .NET allows you to create fully-fledged object-oriented, event-based, strongly-typed applications, which - from a design point of view - are infinitely better constructed than most VB6 applications. Make no mistake though: even in .NET it's completely possible to write crappy applications, and unfortunately it gets done a lot.

The situation gets even more complicated when you add a product to the mix; let's take - as an example - SharePoint ;) SharePoint allows you to extend its out-of-the-box functionality with webparts, which are really just a specific kind of ASP.NET user control. Problem is, SharePoint has its own logic, which makes a number of otherwise sound design principles more problematic (client-side validation using Validators being one).

Again though, with a little effort, it is possible to create a well-structured application even in SharePoint. As an example, in a recent project we needed to create a webpart which would, when the user put the page in Design Mode, automatically display a fairly complex user administration part. This admin part would then manipulate those values and have to persist them as properties in the webpart.

Now, speaking in design terms, two obvious choices presented themselves.

1. Implement the functionality in the webpart itself, resulting in a flat, less complex structure, but also quite quickly leading to spaghetti code, especially if the desired functionality is moderately complex. Plus, from a design POV, the level of abstraction is practically zero, which will not encourage reuse and/or extension of the webpart.

2. Create a series of ASP.NET user controls, and use LoadControl on the top-level user control to add it to the control collection of the webpart. It's elegant, it's sound design-wise, but it does raise the problem of how we communicate between the 'layers'. It's the classic circular reference problem: the webpart needs to know about the user control to create it, and the user control needs to know about the webpart to give some information back to it. Circular references are usually solvable, but even if this one were solved, it would still leave you with a user control that somehow knows it'll be included in a webpart, which is again Bad Design.

So what's the solution then? Any experienced control builder in .NET will now be screaming it: *Events*, of course. That wonderful mechanism - IMO one of the better features of .NET altogether - that allows us to decouple layers from one another and create true plug-and-play usercontrols.

So, you set up the following situation:

- The webpart creates the UserControl through LoadControl:

MyCustomControl myControl = (MyCustomControl)Page.LoadControl("MyCustomControl.ascx");

- The webpart subscribes to the OnDataChanged event of the UserControl:

myControl.OnDataChanged += new MyDataEventHandler(myControl_OnDataChanged);

- The webpart adds the control to its own control collection:

pnl = new Panel();
pnl.Controls.Add(myControl);
Controls.Add(pnl);

- Something happens in the control, and it raises the OnDataChanged event. That's all the control needs to do; no 'upper references' needed.

private void changeButton_Click(object sender, System.EventArgs e)
{
    if (OnDataChanged != null)
    {
        OnDataChanged(sender, new MyDataEventArgs(e));
    }
}

- The webpart receives notification of the event and does whatever it needs to do with it.

private void myControl_OnDataChanged(object sender, MyDataEventArgs args)
{
    // do something useful
}
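One piece the steps above take for granted is the declaration of the event inside the user control itself. Given the names used in the snippets (which are all assumed), it would presumably look something like this:

```csharp
// Sketch of the declarations inside the user control -
// all type names here are assumed from the snippets above.
public delegate void MyDataEventHandler(object sender, MyDataEventArgs args);

public partial class MyCustomControl : System.Web.UI.UserControl
{
    // The webpart subscribes to this; the control itself never
    // needs a reference back to whoever is hosting it.
    public event MyDataEventHandler OnDataChanged;
}
```

The null check before raising the event (shown earlier) is there because the control must also work when nobody has subscribed at all.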

That's all there is to it. The beauty of it is that you now have a user control which can easily be re-used in another webpart, in another usercontrol, or even directly in an aspx page.
Of course, there are a few SharePoint gotchas that even good design can't solve, but we'll leave those for another time ;)

Sunday, September 25, 2005

To fill the void

Welcome to (yet another) technology blog.

In here, I'll mostly be talking about my (mis)adventures in the land of technologies such as BizTalk, ASP.NET and SharePoint 2003. If the mood strikes me, I might even get a bit philosophical now and then; feel free to skip those parts.

What am I doing right now? I'm a Technical Analyst/Project Leader at Dolmen, a Belgian software company. For the last few years, I've primarily focused on areas such as ASP.NET, Design Patterns, UML and BizTalk 2002 and 2004. Recently, SharePoint 2003 has entered that list, and that's what I'm currently using the most.

I consider myself a generalist, meaning that I don't blindly focus on one technology - which, in my opinion, is not really possible anymore, since (for example) to know SharePoint you need at least a solid base of ASP.NET and Web UserControls knowledge (try making a Web Part without it...). Of course, it's completely impossible to know everything about all products in the Microsoft family, so a little bit of specialization automatically occurs.

In the end, being a software guy is not about how good you are with certain technologies; what is important is your ability to adapt to new environments and products. With the current rate of new products/platforms/frameworks/whatever, by the time you 'dig' something completely, the RC1 of the next version usually rolls around the horizon anyway ;) So adaptability and flexibility are key. All of this is of course IMHO, although you can probably drop the 'H' :D

Anyhow, that was rambling number 1.

Sam