Xamarin Forms – Customizing Tabbed Page Header and Title on Windows Phone


In this post I will show how to customize the title and header section of the TabbedPage control when running on Windows Phone. On WP the tabbed page is rendered with a Pivot control, which is quite different from the tabs used on iOS and Android.

Let’s say you wish to just change the font, or completely restyle the title or the header showing the sections. You could create the whole page as a native Windows Phone XAML page with a Pivot control and let that render. This gives complete control of the page and the Pivot control, but sacrifices how much code is shared.

The basic elements of the pivot are illustrated in Figure 2. With Xamarin Forms, the content area renders the content of the selected page. The title and header use the default pivot title and header templates, which are basically a TextBlock showing the Title property of the data context. Let's modify this: change the font and make the title text larger than the header.

Figure 2 – Pivot Control Elements

First create a class that derives from TabbedPage in the shared project / PCL – it may be as simple as the following:
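
A minimal sketch – an empty subclass is all that is needed, since the actual styling happens in the platform specific renderer registered for this type:

using Xamarin.Forms;

public class StyledTabbedPage : TabbedPage
{
}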

Any page on which you wish to use the styling should then derive from StyledTabbedPage instead.

In the Windows Phone project, create a class StyledTabbedPageRenderer that derives from TabbedPageRenderer, and add the ExportRenderer attribute outside the namespace to instruct Xamarin to use this class when rendering the StyledTabbedPage control:
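
A sketch of the registration – the namespace names are illustrative (add a using for the namespace containing StyledTabbedPage), and the interesting part, the OnElementChanged override, follows below:

using Xamarin.Forms;
using Xamarin.Forms.Platform.WinPhone;

[assembly: ExportRenderer(typeof(StyledTabbedPage), typeof(StyledTabbedPageRenderer))]
namespace MyApp.WinPhone
{
    public class StyledTabbedPageRenderer : TabbedPageRenderer
    {
        // OnElementChanged override goes here - see below.
    }
}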

If we look at the definition of the TabbedPageRenderer on Windows Phone, we can see that it derives from the Pivot control:

When an instance of StyledTabbedPageRenderer is created instead of the default TabbedPageRenderer, this instance will be the pivot control itself, so we can access the template properties directly. The OnElementChanged override provides a hook to modify the control after it has been set up. The following changes the title and header by constructing new data templates for the TitleTemplate and HeaderTemplate properties:
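
Roughly like this – a sketch only: the exact OnElementChanged signature depends on the Xamarin Forms version, the fonts and sizes are arbitrary, and the DataTemplates are built from XAML strings via XamlReader (System.Windows.Markup), which is the easiest way to create templates in code on Windows Phone:

protected override void OnElementChanged(VisualElementChangedEventArgs e)
{
    base.OnElementChanged(e);

    // Title: the data context is the Title string itself, so bind directly.
    TitleTemplate = (DataTemplate)XamlReader.Load(
        @"<DataTemplate xmlns=""http://schemas.microsoft.com/winfx/2006/xaml/presentation"">
              <TextBlock Text=""{Binding}"" FontFamily=""Segoe WP Light"" FontSize=""64"" />
          </DataTemplate>");

    // Header: the data context is the child page, so bind to its Title property.
    HeaderTemplate = (DataTemplate)XamlReader.Load(
        @"<DataTemplate xmlns=""http://schemas.microsoft.com/winfx/2006/xaml/presentation"">
              <TextBlock Text=""{Binding Title}"" FontFamily=""Segoe WP Light"" FontSize=""32"" />
          </DataTemplate>");
}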

Note that by default the Title property of the StyledTabbedPage instance is the data context for the title element, whereas the child page instance is the data context for the header element. This is why the title template simply binds directly to the context, whereas the binding expression in the header template must have the Title path added (or any other property you want to write out in the header).

And finally we can see the result:

Capptain (or your app telemetry service of choice) with Xamarin Forms


We are increasingly developing applications targeting both iOS and Windows Phone using Xamarin Forms. In this case, when it comes to telemetry, we are interested in putting as much as possible in the shared code. Here is a way to incorporate Capptain into a Xamarin Forms app using DependencyService and a proxy on each platform. You can use the same approach with other telemetry services, as long as a native SDK is provided for the platforms you target.

Xamarin Insights is currently in preview, but looks very promising. You might want to check it out first and see if it suits your needs.

Define the Interface in the Shared Project / PCL

First, the interface the shared code will use is defined in the shared project or PCL, depending on which approach you are using in the Xamarin Forms solution. The interface can be as simple as a one-to-one mapping of the methods you wish to use from the interface exposed by the Capptain SDK. Here is an example that supports registering activities and events:
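
For example (the interface and method names are illustrative, not the Capptain SDK's own):

using System.Collections.Generic;

public interface ICapptainService
{
    // Registers that the user entered a named activity (page/screen).
    void StartActivity(string name, IDictionary<string, object> extras = null);

    // Sends a custom event with optional extra data.
    void SendEvent(string name, IDictionary<string, object> extras = null);
}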

Provide Platform Specific Implementations

Next we provide the platform specific implementations, which basically just forward the calls to the SDK. Here it is for Windows Phone (install the Capptain NuGet package into the Windows Phone project):
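
A sketch of the proxy – I am assuming the WP SDK exposes a CapptainAgent singleton with StartActivity/SendEvent methods (check the package for the exact member names and the extras type it expects). The Dependency attribute registers the class with the Xamarin Forms DependencyService:

using System.Collections.Generic;
using Xamarin.Forms;
using Capptain.Agent; // assumed namespace from the Capptain NuGet package

[assembly: Dependency(typeof(MyApp.WinPhone.CapptainService))]
namespace MyApp.WinPhone
{
    public class CapptainService : ICapptainService
    {
        public void StartActivity(string name, IDictionary<string, object> extras = null)
        {
            // Forward directly to the native SDK (member names assumed).
            CapptainAgent.Instance.StartActivity(name, extras);
        }

        public void SendEvent(string name, IDictionary<string, object> extras = null)
        {
            CapptainAgent.Instance.SendEvent(name, extras);
        }
    }
}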

For Android and iOS it is a little trickier, since we need to provide bindings to the Java and Objective-C libraries.

For Android, download the Capptain JAR file and add it to a binding project with Build Action set to EmbeddedJar. Some binding errors will show up – I was able to handle them by removing the class CapptainNativePushToken (which means you will not be able to use the push features of Capptain) with the following addition to the Metadata.xml file:
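
The removal could look something like this in Metadata.xml – the package path is an assumption, so check the package name reported in the binding errors:

<metadata>
  <!-- Drop the class that fails to bind; the push features will be unavailable. -->
  <remove-node path="/api/package[@name='com.ubikod.capptain']/class[@name='CapptainNativePushToken']" />
</metadata>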

The Android proxy class follows the same approach as the Windows Phone one, except that any additional data you send along is provided in a Bundle on the Android platform. You will have to make a conversion from the dictionary to a bundle (I wrote an extension method for this):
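
The extension method could be a simple sketch along these lines, stringifying the values:

using System.Collections.Generic;
using Android.OS;

public static class DictionaryExtensions
{
    // Copies dictionary entries into an Android Bundle as string values.
    public static Bundle ToBundle(this IDictionary<string, object> source)
    {
        var bundle = new Bundle();
        if (source == null)
            return bundle;

        foreach (var pair in source)
            bundle.PutString(pair.Key, pair.Value == null ? null : pair.Value.ToString());

        return bundle;
    }
}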

Wire Up Platform Specific Configuration

Telemetry services usually require some configuration initialization and event wiring in app life cycle events. For Windows Phone we add the following in App.xaml.cs to Application_Launching and Application_Activated:
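
Something along these lines – the Init call on the agent singleton is an assumption, so check the Capptain WP documentation for the exact initialization call:

// In App.xaml.cs
private void Application_Launching(object sender, LaunchingEventArgs e)
{
    Capptain.Agent.CapptainAgent.Instance.Init(); // assumed API
}

private void Application_Activated(object sender, ActivatedEventArgs e)
{
    Capptain.Agent.CapptainAgent.Instance.Init(); // assumed API
}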

For Android this should suffice in the MainActivity class (remember setting up the manifest file as described in the documentation):
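
Roughly the following – the CapptainAgent member names are my assumptions about how the Java SDK surfaces through the binding, so verify them against the generated binding code:

// In MainActivity.cs - report activity start/end to the agent.
protected override void OnResume()
{
    base.OnResume();
    CapptainAgent.GetInstance(this).StartActivity(this, "Main", null); // assumed API
}

protected override void OnPause()
{
    CapptainAgent.GetInstance(this).EndActivity(); // assumed API
    base.OnPause();
}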

Use DependencyService to Access Implementations

And now, finally in our Xamarin Forms page we can start shipping data about usage or errors:
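
For example, from a page in the shared code (page class, event name and extras are of course up to you):

using System.Collections.Generic;
using Xamarin.Forms;

public class CheckoutPage : ContentPage
{
    protected override void OnAppearing()
    {
        base.OnAppearing();

        // Resolve whichever platform proxy was registered with the Dependency attribute.
        var telemetry = DependencyService.Get<ICapptainService>();
        telemetry.StartActivity("CheckoutPage");
        telemetry.SendEvent("checkout-started", new Dictionary<string, object> { { "items", 3 } });
    }
}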

Automatically Register Each Xamarin Forms Page as an Activity

If you want to track navigation between pages in general, an easy way is to subclass e.g. ContentPage and wire up some activity tracking:
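
A sketch of such a base page – GetPageActivityName is the virtual hook mentioned below:

using Xamarin.Forms;

public class CapptainContentPage : ContentPage
{
    protected override void OnAppearing()
    {
        base.OnAppearing();
        // Register the page as an activity; defaults to the type name.
        DependencyService.Get<ICapptainService>().StartActivity(GetPageActivityName());
    }

    // Override in derived pages to report a friendlier activity name.
    protected virtual string GetPageActivityName()
    {
        return GetType().Name;
    }
}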

And then let your page inherit from CapptainContentPage instead of ContentPage. By default this will register the activity using the type name, but you can override the GetPageActivityName method in your page to change this.

Dynamics CRM 2011/2013 – Re-register Plugin Assembly With New Strong Name Key


All CRM plugins must be signed with a strong name key in order to be deployed (both to let Dynamics CRM handle assembly naming conflicts and in case you deploy to the GAC on the server). I have come across a couple of projects where the password for the strong name key file was stuck in the head of another person – and later forgotten – making it troublesome to set up a new development machine to build and deploy a solution.

For good reasons, the password cannot be retrieved, so here is a simple approach to re-registering your plugin/workflows with all existing steps and images as defined in the .crmregister file preserved.

1. Delete the Assembly From Server

This can be done either via the CRM Explorer in Visual Studio or the Plugin Registration Tool. Note that this will leave your VS solution completely intact and will not change anything in the .crmregister file.

Using the CRM Explorer in Visual Studio – connect to the organisation, choose “Plug-in Assemblies” in the menu, right click the assembly and choose “Delete Assembly”:

deleteassembly

Using the Plugin Registration Tool – connect to the organisation, right click the existing assembly and select “Unregister”:

deleteassembly2

2. Create the New Strong Name Key File

Right click the project and choose Properties. Go to the Signing tab and choose a new strong name key file.

pluginkey

 

3. Reset Id Attributes in the .crmregister File

Here comes the part where we tell the CRM server to consider this a new plugin with all the existing configuration defined in the .crmregister file.

Open the .crmregister file in the CRM Package Project:

crmregister

Change all Id="{some-guid}" attributes to Id="00000000-0000-0000-0000-000000000000". This must be done for all types of elements in the registration file (solution, plugin, step, image and so on).

4. Deploy Solution to CRM Server

Hit F5 and let the solution be deployed to a CRM server. This will cause everything to be registered as new. Re-open the .crmregister file and all Id attributes will contain a newly generated GUID.

Application Insights – Browser Debug Streaming Data Not Showing


Application Insights is the new analytics platform from Microsoft, currently in preview and part of Visual Studio Online. I had the honor of giving a talk on Application Insights during Danish Developer Conference last week which was very exciting. I plan to write a walk through of the new service soon, but for now I will address a small issue people run into.

Application Insights has a great development feed that provides raw and instant analysis data, which can be used to verify that you send the right data before shipping your app. So, you have a web solution, you added Application Insights using the Application Insights Tools extension (or the functionality included with VS2013 Update 2 RC), and then inserted into your views some JavaScript analytics code from the Application Insights portal that looks like the following:

All psyched up, you hit F5 while having the Application Insights portal open on the stream data page. Ready to see the logs of you browsing around your site, you see that nothing shows up – except for the two server logs that gently tell you everything is ok and that a telemetry session has started. If you have any server side event logging it shows up just fine, but the client side events and page views logged in JavaScript do not.

Configure Streaming Data for JavaScript Analytics

First of all – do check the Visual Studio Online Service Blog. Application Insights is in preview and service level is not guaranteed at this time.

The best way is to set up the JavaScript to load its values from the ApplicationInsights.config file that is added to the solution when you add Application Insights via Visual Studio. This file contains two profiles by default – one for Development and one for Production. The Application Insights SDK switches between these when you run the solution in Debug and Release mode respectively. As you can see, the Development profile has the following element that ensures that server side events logged using the API are sent instantly to the feed in debug mode:

Part of the .NET SDK is the ConfigurationHandler type in the Microsoft.ApplicationInsights.Telemetry.Web namespace. This type generates the JavaScript code that sets up the client-side configuration based on the profile that is active server-side. The ConfigurationHandler type implements IHttpHandler, so first we need to add an endpoint in the Web.config that invokes the handler to generate the JavaScript code. Add the following to your web config file as a child of the <configuration><system.web> element:
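
The registration would look roughly like this (the handler type is the one named above; the .axd path matches the URL used further down):

<system.web>
  <httpHandlers>
    <!-- Serves the generated client-side configuration script -->
    <add path="ApplicationInsights.axd" verb="GET"
         type="Microsoft.ApplicationInsights.Telemetry.Web.ConfigurationHandler" />
  </httpHandlers>
</system.web>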

Change your JavaScript analytics code to the following:

Run your solution again in Debug mode and voilà. The configuration file is now used to configure the client-side as well. If you want to see the generated JS that is injected, just navigate to http://localhost:xxx/ApplicationInsights.axd. (Note that if you create a new solution in VS2013 with Update 2 RC and tick Application Insights to be added, this is how new solutions are set up!)

You can also configure it purely client-side. The JS API is not very well documented (actually not at all) in terms of setting up profiles, and for good reason – the above method should be used, since any configuration done in JavaScript has to be maintained separately and will not switch between debug and release automatically. However, here is how it would look:

And that is it – happy debugging your analytics!

Azure Storage Services REST API Authentication from Windows Store and WP8 applications


The Azure Storage Client Library provides good and easy access to the Queue, Blob and Table services. Unfortunately the library requires ODataLib, which is not yet available in a version for Store and WP8 apps. The good thing is that we have a WCF Data Services compatible REST API to access the services. The official documentation provides details on the available operations, how to authenticate and so on, but how to authenticate can be a bit troublesome to derive from the documentation when you are not using the storage client library. I decided to put together this walk through that enables you to connect to the storage services using the REST API from Store and WP8 applications.

Note that I will focus on the Table service, but the same approach applies to the Blob and Queue services if you substitute the relevant formats from the official documentation.

Authentication Basics

All requests must be authenticated by adding two headers to the HTTP request:

  • x-ms-date – Coordinated Universal Time (UTC) timestamp of the request. The service rejects all requests received that are older than 15 minutes for security reasons
  • Authorization – specifies an authentication scheme, account name and a signature constructed from a Hash-based Message Authentication Code (HMAC) computed using SHA256 and Base64 encoding on the request contents

The date header is fairly straightforward. The authorization header has the following format:
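
Per the documentation it is:

Authorization="[SharedKey|SharedKeyLite] <AccountName>:<Signature>"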

  • Scheme can be SharedKey or SharedKeyLite – the Shared Key Lite scheme hashes a smaller portion of the request, and is the scheme required if you want to access the Table service via WCF Data Services, so we will go with this one here.
  • AccountName is your Azure storage name
  • Signature is the constructed HMAC from a signature string in UTF8 encoding

Basically, the signature is constructed in the following way (from the official documentation):
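
In short, with the Base64-decoded account access key as the HMAC key:

Signature = Base64(HMAC-SHA256(UTF8(StringToSign), AccountKey))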

The signature string used to construct the signature varies depending on the type of service. Blob and queue services signatures are constructed using the same format, and the table service requires its own signature format. You can see these formats in the original documentation for both Shared Key and Shared Key Lite schemes. If we look up the signature format for the Shared Key Lite scheme for Table Service, the documentation states the following format should be hashed:
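
That is, per the documentation:

StringToSign = Date + "\n" + CanonicalizedResource

where Date is the same value that is sent in the x-ms-date header.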

The CanonicalizedResource portion represents the resource targeted in the service. As with the general signature string format, this portion has the same format for blob and queue service requests, and Table service has its own format. Furthermore, for Blob and Queue service this portion is constructed differently for the Shared Key and Shared Key Lite schemes. For Table service the general rule is:
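
Roughly:

CanonicalizedResource = "/" + <AccountName> + <URI path, including any ?comp= query>

e.g. /myaccount/Tables for the request used below.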

Let's construct a signature string for an authorization header used to request some data.

The documentation states that we can get an enumeration of all current tables in the storage with a GET request to the URI https://myaccount.table.core.windows.net/Tables. The signature string will be the following:
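
With an illustrative timestamp (it must match the x-ms-date header exactly), the string to sign is the date, a newline and the canonicalized resource:

Mon, 10 Mar 2014 10:45:00 GMT\n/myaccount/Tables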

The (almost complete) set of headers that we want to add to the request is then:
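
Something like this – the values are illustrative, and the signature is the Base64 HMAC computed below:

x-ms-date: Mon, 10 Mar 2014 10:45:00 GMT
x-ms-version: 2012-02-12
Accept: application/atom+xml
Authorization: SharedKeyLite myaccount:<Base64 HMAC-SHA256 signature>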

Let's see how we can construct the HMAC-SHA256 signature from the signature string and query the REST API.

Azure Storage REST API Authentication in Windows Store, WP8 and Portable Class Library

Calculating the HMAC-SHA256 is done differently depending on whether it is a Store app, a WP8 app, or perhaps a Portable Class Library that should support both. Let us assume we build a PCL and wish to support the Azure storage services on both platforms – that way we get to see both. The goal is to show how to use the API, so I omit any logic that provides abstractions over the responses received; we are just interested in getting the raw XML responses at this point. We will fetch a list of all tables currently in Table storage through the previously stated URI.

Create a new solution with a PCL project, a Windows Store project and a WP8 project. We will do requests using the HttpClient that is available on NuGet (see this post), which can be added to a PCL (or a WP8 specific project). As a result we may have a solution that looks like this:

  • AzureStorageSolution
    • AzureStorage.WP8
    • AzureStorage.Store
    • AzureStorage.PCL

Add the following interface to the PCL that allows the clients to get a raw response of the tables in the storage:
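
Something like this (the member name is illustrative):

using System.Threading.Tasks;

public interface IAzureStorageService
{
    // Returns the raw Atom XML from the "query tables" operation.
    Task<string> GetTablesXmlAsync();
}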

In a real solution we would most likely return some constructed model based on the raw response, but for this purpose we just provide the clients with the response XML directly.

We will delegate the actual computation of the signature to the Phone and Store applications. Add the following interface to the PCL that will be implemented by each:
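
A sketch – I pass the timestamp in as well, so that the x-ms-date header and the signature are guaranteed to agree; you may structure this differently:

public interface IAzureAuthSignatureComputeStrategy
{
    // uriPath is the CanonicalizedResource portion, e.g. "/myaccount/Tables";
    // date is the value that will also be sent in the x-ms-date header.
    string ComputeSignature(string uriPath, string date);
}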

The uriPath argument will take the CanonicalizedResource portion of the signature string as input (in a real solution you may want to create this automatically from the URI you are requesting).

Add an implementation of IAzureStorageService that will do the actual request to get the tables in an XML response format:
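
A sketch of the service – the account name is a placeholder, and error handling is omitted:

using System;
using System.Net.Http;
using System.Threading.Tasks;

public class AzureTableStorageService : IAzureStorageService
{
    private const string AccountName = "myaccount"; // your storage account name

    private readonly IAzureAuthSignatureComputeStrategy _signatureStrategy;

    public AzureTableStorageService(IAzureAuthSignatureComputeStrategy signatureStrategy)
    {
        _signatureStrategy = signatureStrategy;
    }

    public async Task<string> GetTablesXmlAsync()
    {
        var date = DateTime.UtcNow.ToString("R"); // RFC1123 timestamp for x-ms-date
        var resource = string.Format("/{0}/Tables", AccountName); // CanonicalizedResource
        var signature = _signatureStrategy.ComputeSignature(resource, date);

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("x-ms-date", date);
            client.DefaultRequestHeaders.Add("x-ms-version", "2012-02-12");
            client.DefaultRequestHeaders.Add("Accept", "application/atom+xml");
            client.DefaultRequestHeaders.Add("Authorization",
                string.Format("SharedKeyLite {0}:{1}", AccountName, signature));

            var uri = string.Format("https://{0}.table.core.windows.net/Tables", AccountName);
            return await client.GetStringAsync(uri);
        }
    }
}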

If you are building a Store or WP8 specific app, all the code until now will work as well (and for Store you will not need the HttpClient from NuGet). Now let's construct the signature in each of the applications and use the PCL to retrieve data.

Windows Store

Create the following class that implements the IAzureAuthSignatureComputeStrategy interface in your Store project:
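
A sketch using the WinRT cryptography APIs – the access key constant is a placeholder:

using Windows.Security.Cryptography;
using Windows.Security.Cryptography.Core;

public class StoreAuthSignatureComputeStrategy : IAzureAuthSignatureComputeStrategy
{
    // The storage access key (Base64 string) from the Azure Management Portal.
    private const string AccountKey = "<your access key>";

    public string ComputeSignature(string uriPath, string date)
    {
        // Signature string for SharedKeyLite against the Table service.
        var stringToSign = date + "\n" + uriPath;

        var algorithm = MacAlgorithmProvider.OpenAlgorithm(MacAlgorithmNames.HmacSha256);
        var key = algorithm.CreateKey(CryptographicBuffer.DecodeFromBase64String(AccountKey));

        var data = CryptographicBuffer.ConvertStringToBinary(stringToSign, BinaryStringEncoding.Utf8);
        var hash = CryptographicEngine.Sign(key, data);

        return CryptographicBuffer.EncodeToBase64String(hash);
    }
}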

Note that the access key used here is the one you get from the Azure Management Portal. In a real solution, you may want to look for alternatives for how this should be stored.

Finally, we can connect and get the data from the REST API by injecting the compute strategy into the service class in our PCL:
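
For example (the surrounding method is just for illustration):

private async Task ShowTablesAsync()
{
    var service = new AzureTableStorageService(new StoreAuthSignatureComputeStrategy());
    var xml = await service.GetTablesXmlAsync();

    // Do something with the raw Atom XML, e.g. parse or display it.
    System.Diagnostics.Debug.WriteLine(xml);
}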

Windows Phone

Calculating the signature is a bit more straightforward on Windows Phone – here is the strategy implementation:
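
A sketch using HMACSHA256 from System.Security.Cryptography (again, the access key constant is a placeholder):

using System;
using System.Security.Cryptography;
using System.Text;

public class PhoneAuthSignatureComputeStrategy : IAzureAuthSignatureComputeStrategy
{
    private const string AccountKey = "<your access key>";

    public string ComputeSignature(string uriPath, string date)
    {
        var stringToSign = date + "\n" + uriPath;

        using (var hmac = new HMACSHA256(Convert.FromBase64String(AccountKey)))
        {
            var hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign));
            return Convert.ToBase64String(hash);
        }
    }
}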

And the use is exactly the same:

Here is an example of a response that shows only the “WADLogsTable” table is created (used by Azure Diagnostics):

And there you have it. With this you should be able to use your Azure Storage Services from your Windows Store and/or Windows Phone 8 applications. You should check out the Azure documentation on Authentication to see the signature format you need for Blob and Queue services. Furthermore all the operations available and corresponding URIs can also be found in the Azure Storage Services REST API reference.

Finally, if you are up for it, you can get the OData Client Tools for Store and WP8 and use this to provide an abstraction over the REST API access and mapping to entity objects.

C# – Unit testing classes with HttpClient dependence + using Autofixture


The HttpClient added in .NET Framework 4.5 is, simply put, awesome in combination with the new async/await in C# 5.0. If you are doing Phone development or creating a PCL that targets a platform where HttpClient is not available, do yourself a favour and get it from NuGet.

I'm going to go through a way of unit testing code that depends on HttpClient, e.g. for retrieving data from a service. It is a bit dirty: we cannot simply use e.g. Moq, because we are not working with interfaces and the members are non-virtual, so we cannot fully mock/stub the class. In other words, we cannot do something like this:
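
For illustration – HttpConsumer is a hypothetical class under test, and Moq rejects the setup at runtime because GetAsync is not virtual:

// using Moq; using System.Net; using System.Net.Http; using System.Threading.Tasks;
var clientMock = new Mock<HttpClient>();
clientMock
    .Setup(c => c.GetAsync("http://example.com/data")) // throws: non-virtual member
    .Returns(Task.FromResult(new HttpResponseMessage(HttpStatusCode.OK)));

var sut = new HttpConsumer(clientMock.Object);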

Fortunately an HttpClient can be parameterised with a custom handler for sending requests. A typical alternative is to wrap the HttpClient in an implementation of an interface that closely resembles, or is a subset of, HttpClient. Similarly, System.Web.Http.HttpServer can be used to set up an in-memory host for testing.

Here we use fake objects to enable the unit testing. In the end, we will do a version using AutoFixture to automatically arrange the setup. It is based on this approach by Pablo Cibraro, but I'm going to take it a bit further to allow code that e.g. reads from a server to be tested through GetStringAsync.

The HttpClient has a constructor that takes a single argument of the type HttpMessageHandler that governs how the client sends requests. The fake HttpMessageHandler can be done similarly to the one provided by Pablo Cibraro in his post.
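
A minimal sketch of such a handler, returning a pre-configured response for every request:

using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class FakeHttpMessageHandler : HttpMessageHandler
{
    private readonly HttpResponseMessage _response;

    public FakeHttpMessageHandler(HttpResponseMessage response)
    {
        _response = response;
    }

    // Exposed so tests can assert on what was sent.
    public HttpRequestMessage LastRequest { get; private set; }

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        LastRequest = request;
        return Task.FromResult(_response);
    }
}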

Now we can write a test like this, where we inject the test-ready HttpClient into the SUT, which here is of some type HttpConsumer:
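
For example (HttpConsumer and its expected behaviour are hypothetical stand-ins for your own class):

// using Xunit; using System; using System.Net; using System.Net.Http; using System.Threading.Tasks;
[Fact]
public async Task Returns_empty_list_when_service_has_no_content()
{
    // Arrange: an HttpClient whose handler always answers 204 No Content.
    var handler = new FakeHttpMessageHandler(new HttpResponseMessage(HttpStatusCode.NoContent));
    var client = new HttpClient(handler) { BaseAddress = new Uri("http://localhost/") };
    var sut = new HttpConsumer(client);

    // Act
    var result = await sut.GetItemsAsync();

    // Assert
    Assert.Empty(result);
}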

 

At this point we can test our SUT in a number of ways, e.g. by setting the status code in the HttpResponseMessage. But what if the unit being tested calls GetStringAsync on the HttpClient and spits out a list of elements it parses from the response data? GetStringAsync uses HttpResponseMessage.Content to get the response data, where the Content property is of the abstract type HttpContent. We make a fake derived type of HttpContent that can return test specific response data.
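
A sketch of the fake content, serving a fixed string as the response body:

using System.IO;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public class FakeHttpContent : HttpContent
{
    private readonly string _content;

    public FakeHttpContent(string content)
    {
        _content = content;
    }

    protected override Task SerializeToStreamAsync(Stream stream, TransportContext context)
    {
        var bytes = Encoding.UTF8.GetBytes(_content);
        return stream.WriteAsync(bytes, 0, bytes.Length);
    }

    protected override bool TryComputeLength(out long length)
    {
        length = Encoding.UTF8.GetByteCount(_content);
        return true;
    }
}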

Using this we can now test methods that read data using GetStringAsync in the ways we want.

Now let's change it to use AutoFixture. The benefit in this case is not significant, as it requires a bit of configuration of the Fixture, but using AutoFixture's support for data theories with xUnit makes it a little better.

First we need to specify a strategy such that AutoFixture selects the constructor of HttpClient that takes a single HttpMessageHandler argument. We create a strategy that always chooses the constructor that takes a single argument:
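
A sketch using AutoFixture's kernel types (I am assuming the Ploeh.AutoFixture 3.x namespaces):

using System;
using System.Collections.Generic;
using System.Linq;
using Ploeh.AutoFixture.Kernel;

// Selects only constructors that take exactly one argument.
public class SingleParameterConstructorQuery : IMethodQuery
{
    public IEnumerable<IMethod> SelectMethods(Type type)
    {
        return type.GetConstructors()
                   .Where(ctor => ctor.GetParameters().Length == 1)
                   .Select(ctor => new ConstructorMethod(ctor));
    }
}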

Now we create a customization that sets up a fixture for using the strategy when creating a new HttpClient instance:
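
Roughly like this. Besides selecting the constructor, the sketch also relays the abstract HttpMessageHandler and HttpContent types to the fakes defined earlier so the fixture can actually create them – that last part is my addition to keep the example self-contained:

using System.Net.Http;
using Ploeh.AutoFixture;
using Ploeh.AutoFixture.Kernel;

public class HttpClientCustomization : ICustomization
{
    public void Customize(IFixture fixture)
    {
        // Build HttpClient through its single-argument constructor (the handler).
        fixture.Customize<HttpClient>(c =>
            c.FromFactory(new MethodInvoker(new SingleParameterConstructorQuery())));

        // Let requests for the abstract types resolve to the fakes.
        fixture.Customizations.Add(new TypeRelay(typeof(HttpMessageHandler), typeof(FakeHttpMessageHandler)));
        fixture.Customizations.Add(new TypeRelay(typeof(HttpContent), typeof(FakeHttpContent)));
    }
}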

Finally, before we head back to the testing, we create the custom attribute that initialises the test with the customization like this:
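
Something like this, building on AutoFixture's xUnit integration:

using Ploeh.AutoFixture;
using Ploeh.AutoFixture.Xunit;

// [AutoHttpData] supplies auto-generated test arguments with the customization applied.
public class AutoHttpDataAttribute : AutoDataAttribute
{
    public AutoHttpDataAttribute()
        : base(new Fixture().Customize(new HttpClientCustomization()))
    {
    }
}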

And finally we can test something like this:
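
For example – [Frozen] makes sure the HttpResponseMessage handed to the fake handler is the same instance we configure in the test, and HttpConsumer is again a hypothetical SUT:

// using Xunit; using Xunit.Extensions; using Ploeh.AutoFixture.Xunit;
// using System.Linq; using System.Net; using System.Threading.Tasks;
[Theory, AutoHttpData]
public async Task Parses_items_from_response(
    [Frozen] HttpResponseMessage response, HttpConsumer sut)
{
    // GetStringAsync requires a success status code (see the note below).
    response.StatusCode = HttpStatusCode.OK;
    response.Content = new FakeHttpContent("[ { \"id\": 1 }, { \"id\": 2 } ]");

    var items = await sut.GetItemsAsync();

    Assert.Equal(2, items.Count());
}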

Note that GetStringAsync() will throw an exception if the status code is not OK (200). HttpResponseMessage sets this by default, but AutoFixture initialises the StatusCode property to the first enum value, “Continue” (100), so the test sets it explicitly.

In the end – quite some (dirty) work to be able to test methods that depend on HttpClient. Personally, I prefer to go the other direction: wrap HttpClient and let classes that get or send data communicate through the wrapper. However, sometimes it may be necessary to test methods that directly depend on HttpClient, and then it's nice to be able to control the testing as presented. That being said, I encourage you to also look at System.Web.Http.HttpServer before going down this route.

 

Winning the Startup Weekend 2012 in Aalborg


The last week has been quite special. During the weekend a friend of mine and I attended the Global Startup Weekend event in Aalborg, held simultaneously with events all over the world. I showed up to make my development skills useful for any project that sounded interesting. However, during the initial idea pitching I came up with my own idea to work on – an electronic math book format for tablets, with integrated interactive assignments. Whenever the students solved the assignments, statistics would be collected for the teachers in order to more easily perform student evaluations and spot general issues in a class.

I assembled a team of 8 in total and led it through the weekend (and actually ended up letting my project leader skills grow rather than putting much of my development skills into use).
The result was that we won the Startup Weekend in Aalborg and received the prize of 10.000 kr. As a result, we have also entered the Global Startup Battle against the winners of all the other events held around the world.

Me doing the final pitch of “Mathi”, that eventually won the Startup Weekend, in front of the judges and other participants

 

All the contestants in this global competition have uploaded videos on their startup projects, and the 15 that receive the most votes will continue further. Should you want to see our video and maybe hook us up with a vote before the 28th of November, you will find it here: http://bit.ly/QpsK3W (I also appear right at the beginning of the video! :) – sorry about the audio only appearing in the left channel)

 

Update: the project has since received attention from various news sites:
Version2/Ingeniøren

Aalborg University Newspaper

BrainsBusiness

 

Custom pushpin icon for Windows Store Apps with Bing Maps and C#


While the Bing Maps SDK is out of Release Preview and has been made final along with the release of Windows 8, I find that the documentation is still lacking quite a bit. Here are a couple of examples of how you can customize the pushpin control by changing the displayed image overlay, without having to implement a separate control.

Custom icon for pushpin using XAML and C#

Add the following to the resources (e.g. in your XAML for the Page containing the Bing Maps Control):

        <Style x:Key="PushPinStyle" TargetType="Maps:Pushpin">
            <Setter Property="Width" Value="25"/>
            <Setter Property="Height" Value="39"/>
            <Setter Property="Template">
                <Setter.Value>
                    <ControlTemplate>
                        <Image Source="Resources/Images/MapStopPushPin.png" Stretch="Uniform" HorizontalAlignment="Left"/>
                    </ControlTemplate>
                </Setter.Value>
            </Setter>
        </Style>

Remember to set the width and height properties accordingly to match your own image.
Then add your pins to the Map control:
Pushpin p = new Pushpin();
p.Name = "NewPin";
p.Style = this.Resources["PushPinStyle"] as Style;
Location locationOfPin = new Location(57.052023, 9.917122);
// Anchor at the bottom centre of the image so the needle points at the location.
MapLayer.SetPositionAnchor(p, new Point(25 / 2, 39));
MapLayer.SetPosition(p, locationOfPin);
MyMap.Children.Add(p);

And achieve the following:

Remember to correctly anchor your pin if you do not want the actual location to be in the centre of your image (e.g. if you use a needle-like pin as in the illustrated example).
And that's about it :)

ABAP Reflection – Part One (The RTTS and Type Introspection)


Reflection provides the ability to examine, modify and create data types and behavior at runtime. Developers of C# (or Java) who are not familiar with this concept have most likely encountered it at some point, or even used it, through the .NET or Java reflection libraries. Have you ever used the typeof operator or invoked the GetType() method in C#? Then you have already encountered it first hand.

Recently I found the need for reflection in ABAP, all the way from type inspection to creation of new data types at runtime, which I found to be only very superficially documented. This post (and the following one) is a result of my experiences working with reflection in ABAP. In this first post I will briefly give an overview of reflection in ABAP and how it can be used to inspect types at runtime. If you just want to get down with reflection in ABAP, I suggest you skip the next section.

Why bother and why did I need it?

Reflection provides the developer with a number of significant opportunities that one might not realise immediately. Besides useful features such as checking whether two objects are of the same type at runtime, these types can also be further inspected for members, methods and so on. With reflection it is possible to see whether an object has a member with a certain name at runtime, or to modify the value of a member not known at compile time.

In my case, the need arose from a specific ABAP program being developed. The problem was to develop a program that would basically provide the following features for generating reports of data in the system:

  • Let the user choose freely any number of fields/columns between all SAP HCM Master Record database tables (basically all infotypes) as well as custom added database tables
  • Allow the user to specify search criteria on any of the chosen fields as well as choose which fields should be part of the resulting report generated

So basically a small program doing something equivalent to a BI solution; in this case, however, an actual BI solution was not an option. Considering the user's freedom to choose any fields across many different database tables (with up to 80 columns each), and with large data sets in each table, the two primary concerns were performance and how the queried data should be kept in memory for processing. Processing each database table separately and writing intermediate results to some storage was not an option, as each column could require its value to undergo processing with interdependencies between values across tables. Also, the results generated for the output report needed to be displayed as a coherent timeline for each employee, requiring all column values across database tables to be displayed coherently together in each row for an employee with multiple time intervals.

My solution to this – and especially to minimizing the memory usage? Use reflection to create a new type at runtime – more specifically a new record data structure. This record would be equivalent to a single row to be displayed in the resulting report grid and have a field for each column the user selected for display. This is illustrated below. It also added two benefits: first, being able to keep each field in memory with its respective data type as defined in its database table, and second, being able to use dynamic SQL in ABAP to select only the needed columns when querying the database – and as you might know, database access is usually the best place to start when performance is important, especially in SAP.

Use of reflection to reduce memory usage

So that was one of my small moments of need for reflection in ABAP – let's get down to business and see what it is.

Reflection in ABAP through Runtime Type Services (RTTS)

In ABAP, reflection is provided through RunTime Type Services (RTTS). Basically this provides two main features – identifying types and descriptions at runtime, and creating types dynamically. More concretely RTTS is the combination of:

  • RunTime Type Identification (RTTI) - type identification and description (type introspection)
  • RunTime Type Creation (RTTC) - dynamic type creation

In this part, I will focus on RTTS in general as well as RTTI.

RTTS and Class Hierarchy

The RTTS is implemented as system classes that are available in all ABAP programs. There are only a few general principles to consider:

  • For each type kind there exists an RTTI description class
  • A concrete type is defined by its type object, and for every type, one type object exists at runtime
  • As a bi-implication, each type object defines exactly one type
  • Properties about a type are defined by attributes of the type object

So what this means is that each kind of type has a corresponding RTTS class (e.g. all elementary data types have one corresponding type class), and for each concrete type of that kind there will be a single object of that class at runtime (e.g. the elementary integer data type has one single type object that defines it). Let's get an overview of all of this.

All RTTS related classes are named CL_ABAP_*DESCR, depending on which type kind the class represents. CL_ABAP_ELEMDESCR is the description class for elementary datatypes, CL_ABAP_TABLEDESCR is for table types and so on. Illustrated below, is the complete hierarchy of type classes (that closely resembles the ABAP type hierarchy).

RTTS Type Hierarchy

We can see that the type class hierarchy is slightly different from the ABAP type hierarchy, with fewer classes than types. This is a result of only type kinds having a corresponding type class; specific details about a type are represented by the attributes of the object of that class. E.g. in ABAP we have many different kinds of internal tables – standard tables, hashed tables, sorted tables and so on. All of these are described by the same CL_ABAP_TABLEDESCR class, but at runtime each is distinguished by its own CL_ABAP_TABLEDESCR object, whose attributes describe whether it is e.g. a hashed table.

As a final note on RTTS in general, I want to clear up some naming confusion that appears to arise frequently. In the old days, only type introspection was available, not dynamic type creation, and this was named RTTI in ABAP. Later RTTC became available, and both were synthesized into RTTS. There are, however, no specific RTTI or RTTC classes. RTTS is available through the previously mentioned type classes; we simply divide the interface of each class into RTTI and RTTC methods and attributes. So any method on any of the CL_ABAP_*DESCR classes that relates to type introspection can be categorized as an RTTI method, and any method that relates to the creation of a new type object of some type kind can be categorized as an RTTC method.

RTTI Example

Let's get our hands a bit dirty and do some runtime examination of data types. The superclass CL_ABAP_TYPEDESCR, which is also the root of the RTTS type hierarchy, defines a set of (primarily RTTI related) static methods that are available for all the different type kinds. The two most used methods are DESCRIBE_BY_DATA and DESCRIBE_BY_NAME, which each return the corresponding type object for some data object at runtime. They are used by providing a reference to the data object in question or the relative name of the type as input argument, respectively. Note that for a data reference (e.g. a reference to an integer or table) the DESCRIBE_BY_DATA_REF method must be used, and likewise for object references the DESCRIBE_BY_OBJECT_REF method is available.

In the following example the RTTI methods are used to get the type object for the elementary integer data type. We declare a simple integer and retrieve its type object using both methods – first by passing the data object in question and secondly by using the relative name of the integer type in ABAP.

 DATA: lr_typeobj_for_i TYPE REF TO cl_abap_elemdescr,
       lv_an_integer TYPE i VALUE 20.

  lr_typeobj_for_i ?= cl_abap_elemdescr=>describe_by_name('i').

  lr_typeobj_for_i ?=
                  cl_abap_typedescr=>describe_by_data( lv_an_integer ).

So far so good – let's see what this type object can tell us about the integer type by inspecting the object referenced by lr_typeobj_for_i (which is the same object reference assigned in both assignments):

The elementary integer type object

The type object tells us that the kind of the type object is an integer, it has no decimals and occupies 4 bytes. Nothing extremely exciting or surprising for this type, but nonetheless, it shows us the principle of a type object.

Let's go ahead and use the DDIC to get the type object for the data structure underlying the database table of the pa0002 infotype. We use CL_ABAP_STRUCTDESCR to access type information specific to a data structure:

  CONSTANTS: c_pa0002_relative_name TYPE string VALUE 'pa0002'.
  DATA: lr_typeobj_for_pa0002 TYPE REF TO cl_abap_structdescr.

  lr_typeobj_for_pa0002 ?= cl_abap_structdescr=>describe_by_name( c_pa0002_relative_name ).

The PA0002 data structure type object

This time we can, amongst other things, see that this specific data structure occupies 988 bytes, and the CL_ABAP_STRUCTDESCR specific attributes define the structure as a flat structure. Finally, the COMPONENTS attribute provides an internal table that lists all the individual fields of the structure by name, including the data type of each of them (by its relative name). By combining the DDIC and RTTS we are able to examine a database table and see all of its columns and the data type of each column at runtime. Furthermore, we could use this information to get the type object for each of these columns.

These examples show just a small piece of the RTTI features. One can do many more interesting type inspections, e.g. determining which attributes or methods are available on a class or concrete object at runtime.

Finally we can also create new data objects using the type objects. The following shows the creation of a new data structure of the pa0002 type previously obtained, and assigning a value to the pernr field on this structure:

  DATA: lr_new_pa0002_struct TYPE REF TO data.
  CREATE DATA lr_new_pa0002_struct TYPE HANDLE lr_typeobj_for_pa0002.

  FIELD-SYMBOLS: <ls_pa0002> TYPE pa0002.
  ASSIGN lr_new_pa0002_struct->* TO <ls_pa0002>.

  <ls_pa0002>-pernr = 12345678.

In this case we know the type, and thus the field symbol can be used to access the individual fields of the created structure like any other structure. This, however, is not always the case, e.g. if we create a completely new data type at runtime using RTTC. In that case we would have to work with the generic programming parts of ABAP to assign a value to a field. We will see more advanced cases like this in the next part.

 

NetMeter tool


I have always used a small tool to monitor my network usage for various reasons, but recently my favourite stopped being free. Instead of tracking down another, I decided to start a new project of my own and use it as an opportunity to take a look at WPF for the first time.
Currently it simply displays your network usage, but should only be considered an appetizer in its current state – I got some ideas lined up to give this some more sweet features!

Head over to the new page and try it right away if you want – nocture.dk/software/netmeter