Channel: The Microsoft MVP Award Program Blog

Visual Studio 2013 RC Released


Editor's note:  The following blog post was written by Visual Studio ALM MVP Martin Hinshelwood 

What’s new in Visual Studio 2013 RC with Team Foundation Server

As you may have noticed, the Visual Studio team has just put out a Release Candidate of the long-awaited Visual Studio 2013 and Team Foundation Server 2013.

If your computer is compatible then you want to immediately get to grips with the new features. I would recommend that you have a look at What’s new in Visual Studio 2013 Team Foundation Server Preview for two reasons: I am going to assume that you have seen the aforementioned features, and it should give you some idea of the pace of feature improvement you get by being on the same cadence as the TFS product team.

These are just my initial observations from conducting a little exploratory testing on features that we saw in the TFS 2013 Preview and those things that I knew and suspected were coming down the line. The best way to get a heads up is still to create an account on http://tfs.visualstudio.com as it is already ahead of the Release Candidate.

Visual Studio 2013 Team Explorer Enhancements

There have been repeated and increasing enhancements to the Team Explorer. Some of these enhancements have been small experiments and others have been large. Some have been successful and some result in continuous change as the product team evolves things to meet our needs. If only every team building software would innovate as often. If, like me, you take the latest drop at all times you will see the bounding progression of features and enhancements. If you don’t, you will see the usual big leaps.

Visual Studio 2013 Team Explorer remembers your TFS Servers

I was surprised when I opened the connection dialog on my brand new OS with Visual Studio 2013 RC installed and saw a list of TFS servers that I recognised.


Figure: Visual Studio 2013 Team Explorer remembers your TFS Servers

It looks like the team has populated my list of servers with all of the instances from http://tfs.visualstudio.com that I have permission for, and that’s a lot. I am not sure what happens when this list gets bigger than my screen but that’s for another day. I had forgotten that I had connected to some of these servers. A nice enhancement would be the ability to add local servers that are synched as well. That way I could easily select local servers when I go onsite at customers.

New Team Explorer Home page

The new layout for the Team Explorer homepage is much more flexible and has way better extension points.


Figure: The new Team Explorer in 2013

Again we have the context of a single Team Project. While administrators may have preferred the old tree view, users found it confusing and slow. The new interface added with 2012 has been streamlined and enhanced with a year’s worth of usability data.

The Project Section

The project section is topped with a set of useful links:

  • Configure Workspace – This takes you to a screen that allows you to easily configure your workspaces against this instance.
  • Web Portal – The Web Portal is THE way to access and work with much of the data in TFS 2013, even more so than in 2010 and 2012, as we now have Test management right there in the web. In addition to Test Case management there are hubs for Agile Planning, Agile Portfolio Management, Work Item Tracking, Build Management, Code Management and now Reporting.
  • Task Board – Part of the Agile Planning Tools feature introduced in TFS 2012, the Task Board provides a Scrum-style board where Requirement types sit on the left and Tasks flow through states associated with the Requirement from left to right. Typical states are To do –> Doing –> Done.
  • Team Room – Each Team gets its own persistent chat and notification room where users can interact and be notified of Builds and work item changes dynamically. Way better than email.

Although Web Access is now the preferred way to access much of the data in TFS, that does not mean that there are no other options. The following sections have each been incrementally updated, and each of the important nodes uses a flow layout so that they are just as accessible regardless of the size of the window. They are each subtly colour coded, but the new piece is that many of them have a little ellipsis in the bottom right of the button \ panel. If you click the ellipsis you get a drop-down of menu options for that feature. Indeed these panels dynamically change depending on which source control you selected when you created your Team Project. TFS 2013 supports both TFVC and Git.

  • My Work – The My Work section gives you access to the higher-level features like Code Reviews, Suspend/Resume, Task switching and the focus features that you need to be on the Premium or Ultimate SKUs to get. Few of these features work yet if you pick Git as your source control.
  • Pending Changes – A new view on the standard pending changes with a docked panel instead of a floating modal dialog. You can now break it out of the UI and stick it anywhere you want.
  • Source Control Explorer – Only available for TFVC projects, this gives you folder and branch access to your code. I have yet to delve into that UI.
  • Work Items – Gives you access to the standard tree of queries. You can create flat, direct-links or tree queries that show whatever columns you like. There are some Team Explorer-only features like opening queries in Excel or MS Project and turning queries into reports.
  • Build – Want continuous delivery? This is your stop. Create compilation, test and deployment builds that execute on demand, timed or triggered. A special feature added way back in 2010 allows you to pre-moderate your check-ins, letting you build first and reject check-ins for failed builds.
  • Reports – Your gateway to the Reporting Services reports that are available for your Process Template.
  • Settings – The new settings page now seems like a launchpad for the Web Portal. I will not miss those modal dialog boxes… more power to the web…

These same features, well mostly, are available in Eclipse as well.

The Solution Section

The solution section, new in 2013, is awesome. It looks at the scope of your currently selected Workspace and lists all of the Solutions available in Source Control to open. Here I have nothing in source yet and I don’t have my workspace configured. I do believe that there is a limit to the number of solutions that will be listed, but I am not sure what it is.

Visual Studio 2013 Team Foundation Server Enhancements

While the new features in Visual Studio are awesome, I sometimes forget what they look like these days. Apart from the projects that I work on myself with the other MVPs, the Rangers, and just for fun, I rarely get to play in Visual Studio. (sniff) I do miss it; they even (shock) have me coding in C# these days, but I never stop complaining about that.

So where do I play? Well… sometimes in PowerShell, but mostly in Team Foundation Server when I am doing technical stuff. Helping teams and organisations improve their processes is mostly not about tools. However, when I need a tool I always turn to TFS. As the TFS Product Team moves more and more towards agile themselves, the product itself is getting better and better at delivering value in the agile space. Although there are many features that are based on reportability, that is no longer the focus of the team, and the new features concentrate on making your development process as slick as possible.

Team Rooms

Team Rooms are brand new in TFS 2013 and provide a kind of cross between email notifications and persistent chat.

You can configure notifications for various things including Build results and work item changes. The results pop into the window with a little ‘ding’ for others to be notified. If you are unable to get everyone into a physical team room then this is the next best thing. Those of you out there thinking ‘what’s the use of that crap’ should give it a try. Find it valuable or don’t, as you like, but the ability to chat and tag work items just by mentioning #2354, or a person with the usual @YourName mechanism, makes the experience much more interactive.


Figure: Configure events for Team Rooms in 2013

I am looking forward to innovations and experiments here.

Agile Portfolio Management Enhancements

When I looked at What’s new in Visual Studio 2013 Team Foundation Server Preview I spent a lot of time on the Agile Portfolio Management features and even created a video walkthrough. Here I just want to go back and visit some of the areas that  have been improved.


Figure: Backlog View Pick list

First up is that pick list that lets you ‘look up’ and ‘look down’. In the earlier version it was not colour coded, it did not have the current level first, and the text was just the name of the Work Item in question. The new list is eminently more usable and understandable. Here we get more context; we get the colour of the work item type that we can subconsciously relate to. There is also a subtle separator between the ‘current view’ and the alternative views. It was previously easy to forget which level you were at and thus where you had to go to get back to the orderable view. We had the ‘Backlog Items’ highlight on the left, but we had to look way over on the other side of the screen to figure it out. Now we can easily see where we are and where we are looking. Even the addition of the simple “to [other work item type]” text gives us much more of that context.

This to me is the embodiment of a small, simple but extremely valuable enhancement to an existing feature that is only really valuable in short release cycles. In a long cycle it would never make it above the cut line.


Figure: Subtle directional chevron on Backlogs

If you do select another option, in this case I am looking up from ‘Backlog items’ to ‘Features’, you get a subtle indication on the left as well as to where you are. The little colour-coded chevron for ‘Backlog items’ narrows at the top to signify that we are looking up. This gives the other side of the screen the same information in a subtle enough manner as to not interrupt or clutter the display, while still conveying the necessary information.

Mapping from Backlog Items to Features

Another incremental improvement is the ability to easily associate Backlog Items with Features (or whatever you have above the backlog that you are viewing).


Figure: Mapping from Backlog to Parent

Here we can turn on Mapping and a list of the parent items is shown on the right. You can then drag and drop your backlog items onto the required feature to create the associations that you want. This makes it way easier and more intuitive to work with the hierarchy.

Charting from Queries

One of the awesome features in TFS is the reporting, even if it is just incidental reporting when you are not actively trying to get traceability. In Visual Studio you can right-click on a Query and select “Generate Report”. This feature would look at the fields that were available on the query and determine what sort of reporting was possible with those options. It would then let you build out both static and trend reporting in Excel using a macro. Well, as we move towards more of a cloud-based infrastructure we need the same features but, unfortunately or fortunately, there is no Analysis Services in Azure. So what can we do?


Figure: Static Analysis reports in TFS 2013

The product team, in superb incremental style, have implemented the easy part first: static charting. They have created the ability to add charts to your query. To find the options head over to your work item queries and when you select a query you will note an extra tab added to the UI. Where we had only Results and Edit we now get Charts. Once on the Charts tab you can create a new chart and select the chart type:

  • Pie
  • Bar
  • Column
  • Stacked bar
  • Pivot table

While this will never have parity with Excel, there is much more value in this being just available in the UI. Once you have selected your chart type you get to give it a name and then customise the data displayed. You need to first select the grouping. This is the field (dimension) that you want to display the data by. After that you select the values (metric) to display. I don’t hold out hopes for getting trend analysis by RTM of 2013 but if we are lucky some future sprint will bring that functionality.


Figure: Adding lots of charts

You can go ahead and add a bunch of charts giving you different views of the same data and creating a dashboard based on your query data. I love this option…

Multi-reorder of Column Options

As I was clicking through I noticed a little nugget that I have no idea when it was added.


Figure: Reordering Columns

Maybe this was added in 2012 and I never noticed, but you can, when ordering columns, select multiple columns and change their order together. I don’t know how many times I have moved each one individually, and I hope this is a new feature, if only to save face…

Creating Test Plans from the Web Portal

Although web-based testing was added in one of the updates to 2012, there were some serious limitations. We could not create Test Plans and needed to jump into MTM to perform many of those tasks.


Figure: Creating Test Plans from the Web Portal

Now with 2013 you can create a Test Plan directly in the web UI. You can add a name and configure the Area Path and Iteration Path that are relevant. If you want to edit the Test Plan you have to jump into MTM, but the team have added a little button on the far right of the highlight above to jump straight to that page in the application.

Create Test Cases in a Grid view

Power users of Microsoft Test Manager have always called for productivity improvements. They were always used to working in Excel before MTM came along and some things are just easier there. Well the MTM team has been listening and they have added some new features to the web to make things easier.


Figure: Create test Cases in Grid View

You can now create Test Cases just like you once did in Excel. You can modify and add new ones at the same time and save as you go along. If you used to, or currently, create Test Cases in Excel and then port them to MTM, you can now copy and paste them in here and save.

Open the Test Plan or Run Test in Microsoft Test Manager Client from Web

The last feature I want to highlight is the “Run using client” button that sends the selected tests to MTM for execution.


Figure: Launching MTM from the Web UI

In MTM you get data collectors like Video, IntelliTrace, Event Log scraping, Code Coverage and Test Impact Analysis. Sometimes you want those things, and this lets you jump into the right part of MTM and back to web access, making the integration a little bit more seamless.

Conclusion

Although I knew where some of them were, or where I expected them to be, these were just a few highlights of features that I feel are important based on my customer engagements. There are a plethora of features in Visual Studio, like Code Sense (kind of a heads-up display for coding), that contribute to ALM productivity but I have not yet used them.

Remember that Team Foundation Server 2013 is production ready.

If you have the Preview you should upgrade, and anyone on 2010 or 2012 should seriously consider the features available. Remember also that you can still use VS 2010 and VS 2012 with TFS 2013.

About the Author

Martin Hinshelwood has worked with many customers in government, finance, manufacturing, health and technology to help them adopt and improve their use of Agile processes and techniques. Martin is a Professional Scrum Trainer as well as a Visual Studio ALM MVP and Visual Studio ALM Ranger. He writes regularly on http://nakedalm.com/blog, and speaks often on lean-agile and process improvement with Scrum and Visual Studio ALM.


SQL Server: Auto Statistics Cleanup


Editor’s note: The following post was written by SQL Server MVP Ami Levin

Abstract                             

SQL Server creates statistics objects automatically to enable the Query Optimizer to make the right plan choices based on estimated costs. This is an intelligent and useful feature that takes away some of the burden of manually managing statistics. However, when these statistical objects start accumulating over time, they may start incurring unnecessary performance hits. In this article we will investigate this little-known issue and build a solution that will help you clean up your database.

Background

What are statistics objects in SQL Server?

Statistics objects are (surprise…) a collection of statistical information about data in a particular column or set of columns from a table or an indexed view. Statistics objects include a histogram of the distribution of the values in the first column and may include additional information about the correlation statistics (densities) of values among the columns.

For a detailed discussion of statistics, see http://msdn.microsoft.com/en-us/library/ms190397.aspx
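If you want to see what one of these objects actually contains, you can inspect it directly. Below is a minimal sketch using DBCC SHOW_STATISTICS against the AdventureWorks2012 sample database referenced later in this article; the table and index names are only illustrative and assume the standard sample schema:

USE AdventureWorks2012;
GO

-- Show the header, density vector and histogram of the statistics
-- behind one of the sample database's indexes
DBCC SHOW_STATISTICS ('Sales.SalesOrderDetail', 'IX_SalesOrderDetail_ProductID');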

What are statistics used for?

SQL Server’s query optimizer uses that statistical information to estimate the selectivity of a query. The selectivity is estimated based on the actual data value distribution from the statistics objects. This estimate of the expected number of rows for each operator of the query is used to estimate the associated costs of potential plan choices. This information is crucial to the query optimizer’s ability to choose the lowest cost, optimal execution plan and run your workload at peak performance.

How do statistics get created?

Statistics objects are created automatically for every index created on the table. You can create additional statistics by using the CREATE STATISTICS statement. Statistics objects are so important for the operation of the SQL Server engine that Microsoft decided to help users by minimizing the need to manually create them. By default, the query optimizer will create statistics whenever it encounters a query for which a column’s distribution of data may have an impact on the plan operator costs. These statistics are known as “auto created statistics”, or “auto-stats” in short. You can control this feature by setting the database option AUTO_CREATE_STATISTICS using the ALTER DATABASE … SET statement. You can easily distinguish auto-stats by their unique naming convention:



 
  
  
  
  
  

Figure 1 - Auto Stats in SSMS Object Explorer

Note: You can download the AdventureWorks 2012 database used for the examples in this article from http://msftdbprodsamples.codeplex.com/releases/view/55330.

The naming convention is actually quite simple – the _WA_Sys_ is a fixed prefix, the next number represents the column’s ordinal number in the table, and the last section is a hexadecimal representation of the table’s object ID. If you prefer T-SQL, you can use the sys.stats catalog view:

SELECT  O.name AS Table_Name,
        S.name AS Stat_Name
FROM    sys.stats AS S
        INNER JOIN
        sys.objects AS O
        ON S.[object_id] = O.[object_id]
WHERE   S.auto_created = 1
        AND
        O.is_ms_shipped = 0;

 

Should statistics be maintained?

As the data in the table changes, statistics become obsolete. Out-of-date statistics can fool the optimizer into choosing sub-optimal plans which may significantly impact performance. Statistics can be explicitly updated by using the UPDATE STATISTICS statement. By default, SQL Server will update the statistics automatically whenever the data in the table has passed a certain change threshold, meaning that enough values for the given index columns have been changed, added or deleted. You can control this feature by setting the database option AUTO_UPDATE_STATISTICS using the ALTER DATABASE … SET statement. SQL Server chooses the sample size automatically based on the number of rows in the table. The sample size used to create the statistics may range from less than 1% for very large tables up to 100%, or a full scan, for smaller tables.
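For quick reference, here is a minimal sketch of the statements mentioned above: the database options that control automatic creation and updating of statistics, and an explicit statistics update with a full scan. The database and table names are placeholders taken from the sample database used later:

-- Control the automatic statistics behaviour per database
ALTER DATABASE AdventureWorks2012 SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE AdventureWorks2012 SET AUTO_UPDATE_STATISTICS ON;
GO

-- Explicitly refresh all statistics on one table, sampling every row
UPDATE STATISTICS Sales.SalesOrderDetail WITH FULLSCAN;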

The Challenge

Auto stats accumulate over time

Since auto-stats are created implicitly and quietly behind the scenes, most DBAs are unaware of how many really exist. For every query that any client ever issued against the database, there may be a remaining statistics object created just for it. Even if it was a test query, issued once many years ago and never again. The statistics objects are persistent and will remain in the database forever until explicitly dropped. I’ve seen databases containing tens of thousands of such auto created statistics and one with > 300,000!
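To get a feel for how many auto stats each table in the current database has accumulated, a simple grouping query against sys.stats gives a quick overview. This is only an illustrative sketch:

-- Count auto-created statistics per table in the current database
SELECT  OBJECT_NAME(s.[object_id]) AS Table_Name,
        COUNT(*)                   AS Auto_Stats_Count
FROM    sys.stats AS s
        INNER JOIN sys.objects AS o
        ON s.[object_id] = o.[object_id]
WHERE   s.auto_created = 1
        AND o.is_ms_shipped = 0
GROUP BY OBJECT_NAME(s.[object_id])
ORDER BY Auto_Stats_Count DESC;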

So what?

Statistics updates are expensive

The process of updating statistics consists of sampling a certain number of rows, sorting them and building a histogram of values with aggregate information per histogram step. If you run profiler and capture the SQL:BatchCompleted and the Showplan XML events and issue the following statement:

USE AdventureWorks2012;
GO

UPDATE STATISTICS Sales.SalesOrderDetail;

 

You will see that several plans are created, one for the update of each statistics object, and each may look something similar to the plan diagram in figure 2:

Figure 2 – Typical Statistics Collection Plan

You can see that it involves fetching data from the table into memory, segmenting it into steps, sorting and aggregating. For tables like SalesOrderDetail in our example, which are less than ~8MB, the statistics update doesn’t use sampling but scans the entire table. Of course, for larger tables only a small percentage of the rows is sampled by default. However, since each individual statistics gets updated independently of others, the costs add up as more and more objects need to be maintained. I’ve seen many cases where statistics updates for a single column took several seconds to complete. For some workloads, the default sampling is not good enough and a full scan sampling must be performed to produce accurate enough statistics which in turn lead to optimal plan choices. For those workloads, these costs increase dramatically.
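If you want to see how much of a table an update actually sampled, and how much the data has changed since the last update, you can query the statistics properties. The sketch below assumes SQL Server 2008 R2 SP2 / 2012 SP1 or later, where the sys.dm_db_stats_properties function is available:

-- Sampling and staleness of every statistics object on one table
SELECT  s.name AS Stat_Name,
        sp.last_updated,
        sp.rows,
        sp.rows_sampled,
        sp.modification_counter
FROM    sys.stats AS s
        CROSS APPLY sys.dm_db_stats_properties(s.[object_id], s.stats_id) AS sp
WHERE   s.[object_id] = OBJECT_ID('Sales.SalesOrderDetail');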

Accumulating + Expensive = Trouble…

“A few seconds to complete” for a statistic update may not sound like much but for frequently modified OLTP systems, with many tables and columns, it may be detrimental. I’ve seen many cases where the DBAs resorted to switching off automatic update of statistics during daytime workload (at the risk of getting sub-optimal plans) and running custom scripts at off-peak hours to maintain statistics up to date.

And what do auto stats have to do with that?

Well, auto stats have to be maintained like any other statistics. It’s a price you must pay to get better plans, which means better performance. However, what performance gains do these auto stats provide? For databases that have been around for a while, it is safe to assume that at least some auto stats exist which were created to serve a query pattern used long ago that no longer exists. Some of those statistics become redundant as data changes. Index tuning efforts over the years may have created indexes on columns which already had auto stats on them, and now both are maintained.
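One category of redundancy that is easy to spot is auto stats whose column is also the leading key column of an index, since the index already carries its own statistics object for that column. The following is only a rough sketch of how you might list such candidates in the current database:

-- Auto-created stats that duplicate the leading column of an existing index
SELECT  OBJECT_NAME(s.[object_id]) AS Table_Name,
        s.name                     AS Auto_Stat_Name,
        i.name                     AS Index_Name,
        c.name                     AS Column_Name
FROM    sys.stats AS s
        INNER JOIN sys.stats_columns AS sc
        ON s.[object_id] = sc.[object_id] AND s.stats_id = sc.stats_id
        INNER JOIN sys.index_columns AS ic
        ON ic.[object_id] = sc.[object_id]
           AND ic.column_id = sc.column_id
           AND ic.key_ordinal = 1
        INNER JOIN sys.indexes AS i
        ON i.[object_id] = ic.[object_id] AND i.index_id = ic.index_id
        INNER JOIN sys.columns AS c
        ON c.[object_id] = sc.[object_id] AND c.column_id = sc.column_id
WHERE   s.auto_created = 1;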

The big question is – how do we know which ones we can drop and which ones we should keep?

The Solution

“Just drop ‘em all Jack”.

This may sound like a radical idea at first, but think about it. What do you have to gain or lose? Dropping all auto stats will place some temporary stress on the system. As queries come in, the query optimizer will begin recreating those statistics that we just dropped. Every query that adheres to a pattern that requires a statistics object to be created will wait. Once. Soon, typically in a matter of minutes for highly utilized systems, most of the missing statistics will already be back in place and the temporary stress will be over. But now, only the ones that are really needed by the current workload will have been re-created, and all the redundant ones have just come off the expensive maintenance tab.

TIP: If you are worried about the impact of the initial statistics creation, you can perform the cleanup at off-peak hours and ‘warm-up’ the database by capturing and replaying a trace of the most common read only queries. This will create many of the required statistics objects without impacting your ‘real’ workload.

How many are redundant? There is no way to tell unless you try… The risk is very low, so why not see for yourself?

Run the following script to list the auto stats you have in each of your user databases and generate a matching DROP statement for every one of them:

USE [Master];
GO

SET NOCOUNT ON;

-- Table to hold all auto stats and their DROP statements
CREATE TABLE #commands
       (
       Database_Name  SYSNAME,
       Table_Name     SYSNAME,
       Stats_Name     SYSNAME,
       cmd            NVARCHAR(4000),
       CONSTRAINT PK_#commands
              PRIMARY KEY CLUSTERED
                     (
                     Database_Name,
                     Table_Name,
                     Stats_Name
                     )
       );

-- A cursor to browse all user databases
DECLARE Databases CURSOR
FOR
SELECT [name]
FROM   sys.databases
WHERE  database_id > 4;

DECLARE @Database_Name SYSNAME,
        @cmd           NVARCHAR(4000);

OPEN Databases;

FETCH NEXT FROM Databases
       INTO @Database_Name;

WHILE @@FETCH_STATUS = 0
BEGIN
       -- Create all DROP statements for the database
       SET @cmd = 'SELECT N''' + @Database_Name + ''',
                          so.name,
                          ss.name,
                          N''DROP STATISTICS ['' + ssc.name + '']''
                                 + ''.['' + so.name + '']''
                                 + ''.['' + ss.name + ''];''
                   FROM   [' + @Database_Name + '].sys.stats AS ss
                          INNER JOIN [' + @Database_Name + '].sys.objects AS so
                          ON ss.[object_id] = so.[object_id]
                          INNER JOIN [' + @Database_Name + '].sys.schemas AS ssc
                          ON so.schema_id = ssc.schema_id
                   WHERE  ss.auto_created = 1
                          AND
                          so.is_ms_shipped = 0';

       --SELECT @cmd -- DEBUG

       -- Execute and store in temp table
       INSERT INTO #commands
              EXECUTE (@cmd);

       -- Next Database
       FETCH NEXT FROM Databases
              INTO @Database_Name;
END;
GO

 

At this point, switch the query results output to text by pressing Ctrl+T or, from the Query menu, selecting “Results To” and then “Results to Text”, and run the following:

 

WITH Ordered_Cmd
AS
-- Add an ordering column to the rows to mark database context
(
       SELECT ROW_NUMBER() OVER (
                                PARTITION BY Database_Name
                                ORDER BY     Database_Name,
                                             Table_Name,
                                             Stats_Name
                                ) AS Row_Num,
              *
       FROM   #commands
)
SELECT CASE
              WHEN Row_Num = 1
              -- Add the USE statement before the first row for the database
              THEN REPLICATE(N'-', 50) + NCHAR(10) + NCHAR(13)
                   + N'USE [' + Database_Name + '];'
                   + NCHAR(10) + NCHAR(13)
              ELSE ''
       END
       + cmd
FROM   Ordered_Cmd
ORDER BY Database_Name,
         Table_Name,
         Stats_Name;

-- CLEANUP
CLOSE      Databases;
DEALLOCATE Databases;
DROP TABLE #commands;

 

Select all the rows from the results pane and you now have a script that you can paste back into a query window, modify, schedule, or segment to run database by database, and that will remove all existing auto-created stats from all databases on your server.

NOTE: I recommend running the DROP statements at off-peak hours. These statements may deadlock on schema locks with concurrent sessions. If that happens and you see deadlock errors, just run the script again to finish the job.

Conclusion

SQL Server creates statistics objects automatically to enable the Query Optimizer to make the right plan choices based on estimated costs. While this is a very useful feature that eliminates manual creation and maintenance of statistics, it can grow to present undesirable overhead on systems in which these auto-created statistics are not periodically culled. The utility script included in this article provides an easy mechanism to assess how many auto-created statistics objects are present, and optionally delete them via the same T-SQL script.

The author would like to thank Tom Huguelet and MVP Douglas McDowell for their technical review.

About the author

Ami Levin is a Microsoft SQL Server MVP and a Mentor with SolidQ. He has been consulting, teaching, and speaking about SQL Server worldwide for the past 15 years. Ami’s areas of expertise are data modeling, database design, T-SQL and performance tuning. Follow him on Twitter.

About MVP Monday

The MVP Monday Series is created by Melissa Travers. In this series we work to provide readers with a guest post from an MVP every Monday. Melissa is a Community Program Manager, formerly known as MVP Lead, for Messaging and Collaboration (Exchange, Lync, Office 365 and SharePoint) and Microsoft Dynamics in the US. She began her career at Microsoft as an Exchange Support Engineer and has been working with the technical community in some capacity for almost a decade. In her spare time she enjoys going to the gym, shopping for handbags, watching period and fantasy dramas, and spending time with her children and miniature Dachshund. Melissa lives in North Carolina and works out of the Microsoft Charlotte office.

 

 

CloudBurst Conference


 

MVPs, Cloud enthusiasts and other industry leaders will gather in Sweden for the 2013 CloudBurst conference on September 19th. Organized by Windows Azure MVPs Alan Smith and Magnus Mårtensson, the two-day conference will focus on developing Windows Azure applications and real-world cloud-based solutions.

"CloudBurst shows what the passion and enthusiasm of the Windows Azure development community can achieve when focused on delivering something awesome," said Windows Azure MVP Alan Smith. The entire conference and each session will be streamed live, and the content will be made available to watch after the conference.

"CloudBurst aims to give the speakers the freedom to deliver great sessions and keep the event cost-free to attendees," said Smith.

 

Seven MVPs will be presenting during the conference; their sessions are:

 

Maarten Balliauw

Maarten Balliauw - Windows Azure MVP

Maarten Balliauw is a Technical Evangelist at JetBrains. His interests are all web: ASP.NET MVC, PHP and Windows Azure. He's a Microsoft Most Valuable Professional (MVP) for Windows Azure and an ASPInsider. He has published many articles in both PHP and .NET literature such as MSDN magazine and PHP architect. Maarten is a frequent speaker at various national and international events such as MIX (Las Vegas), TechDays, DPC, ...

How it’s made: MyGet.org

Ever wonder how some applications are built? Ever wonder how to combine components of the Windows Azure platform? Stop wondering and learn how we’ve built MyGet.org, a multi-tenant software-as-a-service. In this session we’ll discuss architecture, commands, events, access control, multi tenancy and how to mix and match those things together. Learn about the growing pains and misconceptions we had on the Windows Azure platform. The result just may be a reliable, cost-effective solution that scales.

[ @maartenballiauw | Blog ]

 

Andy Cross

Andy Cross - Windows Azure MVP

Andy Cross is an author, software consultant, cloud architect and co-owner of Elastacloud. His passion for distributed computing, big data and high performance compute sees him building massively scalable systems utilising the limitless potential of the cloud. Andy’s specialism in the Cloud realm is big data, computation distribution, runtime diagnostics and service elasticity. He is a Windows Azure MVP, Insider, co-founder of the UK London Windows Azure User Group and a Microsoft DevPro Community Leader.

Big Data for the Win!!

Liberate yourself from the shackles of limited data processing. Embrace unstructured, semi-structured and structured data. Express your logic in best of breed technologies. Utilise thousands of cores of compute to learn how to win the Internet. Andy will explain different data structures and paradigms that can be effectively used to express them. Using Windows Azure, he will provision and interact with Hadoop, Mahout and Neo4j to mine massive data sets, learn and express meaning from them. Although embracing non-Microsoft technologies, this will be presented from a familiar place; hosted on Windows Azure and consumed with C#.

[ @andybareweb | Blog ]

 

Patriek van Dorp

Patriek van Dorp - Windows Azure MVP

Patriek van Dorp is a Technology Consultant for Microsoft technologies at Sogeti in The Netherlands. Over the last 5 years, he has specialized in Cloud Computing and Microsoft’s Cloud Computing platform, Windows Azure. Patriek is passionate about bringing different technical communities with different skill sets together and introducing them to the Windows Azure platform. He co-founded the Dutch Windows Azure User Group, which contributed to him receiving the MVP award for Windows Azure for three years in a row now.

Bursting to the Cloud in 1 Hour

Because applications seldom stand on their own, developers are increasingly facing challenges that cover the skills of both Developers and IT Pros. In order to overcome these challenges, both areas of expertise need to evolve towards each other. DevOps is born! This session will show developers how they can extend an existing on-premises infrastructure to the Cloud to facilitate their Cloud applications. The whole process will take no longer than 1 hour and can all be done from the comfort of Visual Studio.

[ @pvandorp | Blog ]

 

Magnus Mårtensson

Magnus Mårtensson - Windows Azure MVP

Magnus Martensson is the first Windows Azure MVP in Northern Europe. In his work as Technical Lead and Software Architect consultant he aids ISVs and other customers to take off and be successful in the Cloud as well as assists with project effectiveness and development process streamlining. Magnus is also a passionate international speaker and trainer and the next gig he would love to book is Yours!

Cloud is the new Black... and all that Jazz!

In this session join a true Cloud enthusiast in pondering the opportunities in the Cloud. Let’s talk about what makes the Cloud unique compared to non-cloud predecessors. We will look at the nature of Cloud Computing – service oriented with a focus on statelessness, low coupling, modularity and semantic interoperability. How does this relate to the Modern App? How do I get going if I’m not a ‘Cloud enabled Developer’ today and is this urgent? We’ll talk about how to embrace the change. The session will underline the concepts with demos of services in the Windows Azure Platform.

[ @noopman | Blog ]

 

Mike Martin

Mike Martin - Windows Azure MVP

Mike is Microsoft Lead Consultant and Architect at Crosspoint Solutions (part of the Cronos group), a company with a strong focus on BI, Data and CRM. Mike is mainly focused on the complete Microsoft product stack and therefore very flexible to work with. He’s been active in the IT industry for over 15 years and has performed almost all types of job profiles, going from coaching and leading a team to architecting and systems design and training. Today he’s primarily into the Microsoft Cloud Platform and Application Lifecycle Management. He’s not a stranger to both dev and IT Pro topics. In January 2012 he became a crew member of AZUG, the Belgian Windows Azure User Group. As an active member he’s both involved in giving presentations and organizing events. Mike is also a Windows Azure MVP and Windows Azure Insider. Mike likes helping out in the community and introducing new & young people into the world of Microsoft and technology.

Windows Azure through the eyes of an IT Pro and how to cope with Developers and Business Stakeholders

The job of an IT Pro is changing in every aspect. Windows Azure is there as an extension to your domain. Hence Windows Azure even gives you opportunities in areas you might not have been involved in yet. In this session we hand you some practical tips for the SDLC and really focus on those areas that you need to master to better understand the people you have to work with in your everyday environment. Topics here are maintenance and deployment of applications, and the setup of environments.

[ @TechMike2kX | Blog ]

 

Alan Smith

Alan Smith - Windows Azure MVP

Alan is from England and resident in Stockholm where he works as a consultant for Active Solution, a company specializing in Windows Azure and .NET development. He has been in the IT industry since 1995 and delivering training courses since 2005. Between courses he acts as a mentor and coach to other developers and teams, sharing his knowledge of Windows Azure to improve their project development. He also works closely with Microsoft, taking part in technology preview programs, attending software design reviews and providing feedback to development teams. Alan has an active role in the developer community running a local Windows Azure user group as well as playing a leading role in organizing the Global Windows Azure Bootcamp and the CloudBurst conference in Sweden. He is a frequent presenter at conferences and user groups in Sweden and around Europe and has held the Microsoft MVP award for nine years, currently as an MVP for Windows Azure.

Handling Big Data in Windows Azure Storage

Windows Azure Storage provides massively scalable storage services with local and geo-redundancy. Organizations can store terabytes of data off-site in a reliable, highly available and cost-efficient manner. For those of you wishing to explore the capabilities of Windows Azure Storage there are bound to be a number of questions that will arise. Can a blob storage container really handle millions of blobs effectively? How efficient is a Table Storage query when the table contains over a billion entities? How can these data volumes be uploaded reliably and effectively? How can the usage and billing of a data-intensive solution be monitored? This demo-intensive session will answer these questions and show how big data scenarios can be effectively implemented using Windows Azure Storage. The Windows Azure Storage API will be covered in depth, along with techniques to increase the speed and reduce the cost of uploading data to Windows Azure. A solution storing the 4.3 million articles that make up the English Wikipedia content and providing a simple text index search will be used as a case study.

[ @alansmith | Blog ]

 

Michael Wood

Michael Wood - Windows Azure MVP

Michael Wood is a Senior Application Developer for Resurgent Capital Services. He describes himself as a problem solving, outdoorsy, user group founding, dog-loving, blog writing, solution creating, event planning, married, technology speaking, father of one kind of guy. When he's not living up to that title he's an avid reader, (horrible) violinist and gamer. Michael is one of the Founding Directors of the Cincinnati .NET User Group in the States as well as the founder of the Cincinnati Software Architecture Special Interest Group. He is also a Microsoft MVP in Windows Azure.

Dependable Cloud Architecture

Felix Baumgartner jumped from over 128,000 feet to break a world record for sky diving. An image was posted to a Facebook account after his landing and in 40 minutes it had logged over 216,000 likes, 10,000 comments, and 29,000 shares. The Pottermore website, digital home to one of the world’s greatest known Wizards, saw a billion (with a “B”) page views just two weeks after it launched and was signing up new users at a rate of 25,000 per day. Whether you need this level of instant scalability, or you simply want to be ready for it when your own idea takes off, the cloud allows you to create solutions that can scale easily and have high availability; however, these qualities do not come automatically. You need to know how best to leverage a cloud platform to achieve these capabilities successfully. This presentation focuses on architecture patterns and coding techniques that help provide reliable cloud solutions. While the content of this presentation can be applied to many cloud platforms, the examples will be given using Windows Azure.

[ @mikewo | Blog ]

Friday Five - September 20, 2013


1.  Using Microsoft Dynamics CRM 2011 and Internet Explorer 10

     By Dynamics CRM MVP Chris Cognetta –

2.  Cloud-based Load Testing with TFS Service

     By Visual Studio ALM MVP Esteban Garcia - @EstebanFGarcia

3.  Creating path-based site collections in conjunction with your host-named site collections

     By SharePoint MVP Corey Roth - @coreyroth

4.   MVC, Ajax and Relative URLs to Controller Actions

     By ASP.NET/IIS MVP Jay Smith

 5.  VST: Rendering WinForms in Microsoft Dynamics GP web client

     By Dynamics GP MVP Mariano Gomez

BizTalk Integration Development Architecture


Editor’s note: The following post was written by Microsoft Integration MVP Leonid Ganeline

 

BizTalk Integration Development Architecture

 

If a company uses BizTalk Server extensively over the years, a great number of BizTalk applications are created, with more scheduled for creation. These applications might be related to each other or might be independent, but they all work in a shared BizTalk environment. If we do not apply some sort of discipline to the implementation, the BizTalk deployment becomes a complete mess.

BizTalk provides a broad spectrum of tools and functions out of the box. It forces developers to use certain patterns and implementation methods. But several functions are not covered by BizTalk, and developers have to use their own techniques to implement this functionality. Also, some functionality is implemented inconsistently throughout BizTalk, and a developer has many ways to use these features.

Architecture here provides some important limitations on the techniques used and hence simplifies the implementation, making it clear and understandable.

One example is the BizTalk deployment. BizTalk provides many ways to deploy the BizTalk artifacts into run-time environments. Without organizing the deployment process, we can easily reach the point where our BizTalk system looks like a jungle. We could not deploy anything without breaking something, we could not fix any bug without risking breaking something, and we could not improve our system without changing something unrelated to our modification.

So the architecture provides us with limitations and rules. The architecture provides us with shared technological pieces and shared methods and principles. The architecture is a method to greatly simplify development, deployment and management; it is a method to be more productive and create more reliable solutions.

You can find some architecture information in the BizTalk documentation. You will find several tutorials and a good amount of samples. Almost all of them relate to the infrastructure architecture, i.e. how to create highly available systems with clusters, how to scale out the BizTalk systems, etc. I am going to cover the development aspects of architecture.

Why am I using the Integration term here, as in the BizTalk Integration, and not the Project, the Solution, or the Application terms? The Project, Solution and Application terms are used for concrete artifacts, but in this case Integration means several Visual Studio Solutions, several BizTalk Applications, and many, many BizTalk Projects. In this context the BizTalk Integration means all BizTalk Applications on a single BizTalk computer or cluster.

The architecture of a typical big BizTalk Integration has several generic parts and architecture points. Big Integration means there are more than 10 BizTalk applications, which of course are composed of all sorts of artifacts: schemas, maps, orchestrations, pipelines, pipeline components, helper .NET classes, ports, etc.

In my experience, a big BizTalk Integration is usually architected around these key points:

  •          Artifact Composition
    •          Naming conventions
    •          Folder structure
    •          Shared Artifacts
  •          Functionality
    •          Deployment and Configuration
    •          Tracing and Logging
    •          Testing
    •          Error Handling
  •          Documentation: Design, Projects, Operations
  •          Methodology
    •          BizTalk technologies in use
    •          Development patterns
    •          Integration Patterns

This is by no means a complete list. In your implementation it might look different, and that is OK. It is not a recommended list; it is just a sample of one possible architecture approach.

 

The Questions

I am going to focus on several architectural aspects which can be expressed by these questions:

What is wrong with this BizTalk feature?

Here I am going to discuss the cons of a BizTalk feature: why do we have to create additional artifacts, why not use an existing BizTalk feature?

What are the trade-offs?

There are always several solutions to one problem. The best solution always has drawbacks, and we should clearly understand them.

How do we want to improve?

Our improvement should be measured clearly and understandably. The measurement of success or failure cannot be based on vague perception.

What does a typical implementation look like?

An implementation sample and pattern keeps the developer on the right road and speeds up development.

Architecture pieces

The Artifact Composition

Generic Rules

  •         A Visual Studio solution defines exactly one BizTalk application. The terms “Visual Studio solution” and “BizTalk application” can be used interchangeably without explicit announcement. Both must use the same name. It is prohibited to spread one solution into several applications or one application into several solutions!
  •         A Visual Studio project defines exactly one .NET assembly. The terms “Visual Studio project” and “.NET assembly” can be used interchangeably without explicit announcement. Both must use the same name. It is prohibited to spread one project into several assemblies or one assembly into several projects!

Naming Conventions

Artifact composition and naming conventions are two sides of one coin; both mirror the same object hierarchy. We create our architecture model and implement it in the naming conventions. Clear, elegant naming conventions mean a clear, elegant model, and vice versa.

Here I am showing a typical naming convention for the BizTalk Integration. It is a little bit boring, so feel free to skip over it. :)

Name definitions

Composite names are names composed from several words in Pascal format without separating symbols.

Examples: MySolution, PatientRecord. 

Artifacts can use full names or logical names. 

Logical names are composite or single-word names with or without separating symbols [.-_/]. A logical name defines a logical entity within logical hierarchy.

Examples: MyCompany.SeattleOffice, MyDomain-DepartmentA, MySolution_Internal, Schemas.SystemX. 

Full names are compounded from logical names separated by separating symbols. Full names express logical hierarchical grouping.

Examples: MyCompany.MyDomain.MySolution.Schemas. 

Naming Conventions Syntax

I use these naming conventions as source information.

Generic Naming conventions:

<Word> =: 
         [A-Z][a-z0-9]*

<CompositeName> =:
         <Word>[._]<Word>[[._]<Word>]

<LogicalName> :=
         <Word>[.<Word>]
         <CompositeName>

<Company> =:
<Domain>    =:
<Solution>    =: 
         <LogicalName>

<Namespace>         =:
         <Company>.<Domain>.<Solution>

Naming conventions for Visual Studio artifacts:

<VSSolutionFullName>        =:
<BizTalkApplication>           =: 
         <Namespace>

<VSProject> =:
<Assembly> =:
<DotNetNamespace> =:
         <Namespace>.<VSProjectLogicalName>

 

<SolutionsRootFolder> =: c:\Solutions

<SolutionFolder>      =:
         <SolutionsRootFolder>\<Namespace>

<ProjectFolder>       =:
         <SolutionFolder>\<Project>

 

<OrchestrationProject>       =:
         <Namespace>.Orchestrations

<SchemaProject>    =:
         <Namespace>.Schemas

<MapProject>          =:
         <Namespace>.Maps

<PipelineProject>     =:
         <Namespace>.Pipelines

<PipelineComponentProject>         =:
         <Namespace>.PipelineComponents

 

<TargetNamespace> =:
         http://<Domain>.<Company>.com/<SolutionLogicalName>/<ProjectLogicalName>/<Version>

<Version> =:
         <date> [in YYYY-MM-DD format]

Naming Conventions for BizTalk Artifacts:

<Orchestration>       =:
         <LogicalName>

<Schema> =:
         <LogicalName>
         <LogicalName>_FF [for Flat File schema]

<Map>      =:
         <SourceSchema>_to_<DestinationSchema> [for one-to-one map]
         <SourceSchema1>_and_<SourceSchema2>_to_<DestinationSchema> [for two-to-one map]

<Pipeline> =:
         <LogicalName>

<Port>      =:
         <Namespace>.<LogicalName>

<ReceiveLocation>   =:
         <Port>.<TransportType>

Shared Artifacts

This picture does not cover all BizTalk artifacts, only several of the most used ones.

It shows relations between the BizTalk artifacts:

 

As you can see, schemas are the most used and the most elementary artifact. Maps, orchestrations and pipelines use schemas, not the opposite. Schemas play one of the most important roles in integration. Distributed systems integration is based on contracts, and an XML schema is one implementation of a contract. If we share only schemas, our artifacts will be loosely coupled.

BizTalk and the integrated systems usually exchange data in strictly defined formats, the contracts. Once defined, the contracts should be kept stable and rarely be changed.

This concept forces us to pay special attention to the schemas in the integration architecture. It also forces us to follow several simple rules:

  •          Place schemas in a separate project;
  •          Place independent schemas in separate projects.

Let’s assume the BizTalk application integrates two systems, A and B. Our rules force us to create at least three schema projects: SchemasA, SchemasB, SchemasInternal. AssemblyA (with SchemaA) is exposed to the BizTalk application and to system A; AssemblyB (with SchemaB) is exposed to the BizTalk application and to system B; AssemblyInternal (with schemas used only internally) is accessible only to the artifacts of this BizTalk application.

One integrated system in a big BizTalk Integration can be integrated with several BizTalk applications. That is why it is hard to predict if the schemas will be used in one or in several applications.

It makes sense to place all schemas exposed by integrated systems into a separate solution. This creates one place where contracts are stored and managed. We keep our guns in the arsenal, under good control.

Here is an example of such a solution in a real-life integration:

It has the Shared.Schemas name.

Folder Structure

Folder structure should follow the solution structure. Here is an example for the Shared.Schemas solution folder structure:

 

The detailed discussion of the Artifact Composition will be in the next article, “the BizTalk Integration Development Architecture: Artifact Composition”.

Documentation

Documentation is an essential part of the architecture for big BizTalk Integration. 

Documentation is a single entry point for all interested people including developers, administrators, and stakeholders.

A big BizTalk Integration is created not as a single project but as several independent projects developed by different teams, with different goals, different requirements, and different skills. It is important to perform an architecture reevaluation of the code base at least once a year. The main part of this job is to keep the documentation up to date.

Good documentation means the integration is kept on the architect radar. It means the stakeholders know the present and the future of company integration.

Bad documentation means bad management, lack of resources, and lack of perspective.

I prefer to use a hierarchical document model. The document model mirrors the architectural model. A single document, “The BizTalk Deployment Architecture”, is at the top of the hierarchy. The document structure has several main parts: the Design documents, the Operation documents, and the Project documents.

Design documents map the main chapters of this document.

Design documents:

  •          Naming Conventions
  •          The Artifact Composition
  •          Deploying
  •          Tracking and Logging
  •          Error Handling
  •          Testing
  •          Integration Patterns
  •          Development Patterns
  •          BizTalk Technologies in Use
  •          Other Design Information

Operation documents are usually heavily influenced by corporate standards, so the following document list will probably be different for you.

Operation documents:

  •          Monitoring
  •          Settings, Tuning up
  •          Maintenance
  •          Disaster Recovery
  •          Other Procedures

A project document describes only information specific to this project. All generic information should be placed in Design or Operation documents. It is not permitted to place generic information inside the project documents.

Project documents:

  •          <ProjectName> project

The document templates will be provided in one of the next articles, “the BizTalk Integration Development Architecture: Documents”.

Functionality

A detailed Functionality discussion will be provided in one of the next articles, “the BizTalk Integration Development Architecture: Functionality”.

Deployment and Configuration

There are many different objects (artifacts) in BizTalk Server, each with a different deployment process. Deployment for orchestrations, business rules, BAM tracking profiles, ports, pipeline components, .NET helper classes, and configuration parameters is not the same.

Deployment order is very important. If applications share some artifacts, a small fix can force you to redeploy not just a single assembly but several of your applications.

Deployment in the multi-server installation adds complexity.

Deployment in several environments (Development, Test, Staging, and Production) makes configuration even more complex.

For a big BizTalk Integration we now have only one choice, the BizTalk Deployment Framework (BTDF). It is a free tool on CodePlex, created by Scott Colestock and Thomas Abraham. This great tool puts together all deployment steps and enormously simplifies deployment and configuration.

It is the mandatory tool for a big BizTalk Integration. Many architectural choices here are made because of the BTDF. For example, BTDF supports log4net and NUnit out of the box, so these utilities are a natural choice.

Tracking and Logging

These two terms are very similar in a BizTalk environment. Logging is different in that it works all the time and registers important events, such as a host instance starting or stopping. Tracking is something that developers include in code to track information useful for debugging. Logging is small and fast and does not harm performance; it is usually turned on all the time. Tracking is heavier; it produces more information and harms performance. It can be turned on only on special occasions like bug investigations or testing.

I agree, it is not a classic definition of logging and tracking, it is how I use it here.

BizTalk Server implements logging internally for many important events, such as errors or changes in the state of run-time artifacts. We cannot manage this logging and there is no API to control it. This internal logging is not consistent across events and does not have a consistent format, so it is hard to use for everyday monitoring without additional custom log viewers.

There is a massive implementation of tracking in BizTalk Server. We can turn on tracking for almost all run-time artifacts, such as orchestrations, ports, and pipelines. Tracking data consists mostly of the messages and message context, and the tracked data is stored in the tracking databases. Tracking coverage is broad. The bad things about it are that BizTalk doesn't have good tools to dig through the tracking data, it doesn't have an API to access the tracking data, and tracking hurts performance badly. So BizTalk tracking is usually turned on only for a short period of time, and looking through the tracked data is a slow process that requires a long learning curve. As a result, BizTalk tracking is not good for development and testing, only for bug investigations.

One of the possible solutions for tracking and logging is to use a standard logging library such as the Enterprise Library Logging Application Block, NLog, or log4net. The Enterprise Library is not a competitor here because of several drawbacks: it is the most complex package in this list, it is the slowest, and, most importantly, it is used inside the ESB Toolkit, which creates versioning problems. NLog is the most contemporary library, but log4net is chosen because it is supported by the BizTalk Deployment Framework.

Log4net is used for both tracking and logging. Only a limited set of events is registered in logging mode: the events that are important for everyday monitoring and error handling. In tracking mode, additional information is registered, including message bodies and message flow. The log4net output is easily tuned, and the development effort to use log4net is minimal.

Logging data is written to log files and to the Windows event log for auditing, and to notification emails for immediate error handling.

Tracking data is additionally written to the debug output for testing and debugging.
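
A minimal sketch of how such a log4net helper might look (the class name, logger name, and method names are assumptions for illustration, not part of any shared library):

    using log4net;

    // Logging: a small set of events that are always on.
    // Tracking: heavier output (message bodies) that is emitted only when the
    // DEBUG level is enabled in the log4net configuration.
    public static class IntegrationLog
    {
        private static readonly ILog Log = LogManager.GetLogger("GLD.Samples.Integration");

        public static void Info(string message) { Log.Info(message); }

        public static void Error(string message, System.Exception ex) { Log.Error(message, ex); }

        public static void TrackMessage(string interchangeId, string messageBody)
        {
            if (Log.IsDebugEnabled)
            {
                Log.DebugFormat("InterchangeId={0} Body={1}", interchangeId, messageBody);
            }
        }
    }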

Error Handling

Error sources

Errors are raised in BizTalk Server in two main places: orchestrations and ports. Other artifacts are usually nested inside orchestrations or ports, so an error in a map, for example, is raised by the orchestration or the port that hosts that map.

  •          Ports: there is a standard way to handle errors on ports. We set the “Enable routing for Failed Messages” option on the ports. This promotes several special context properties on the failed message, and we use these properties to subscribe to the failed messages. Typical error handling is to create a failed-message subscriber: a send port or an orchestration that receives failed messages, creates a well-formatted notification, and sends it as a record to the database and/or as an email to the interested parties (see the sketch after this list). Without this option set, the failed messages are simply suspended.
  •          Orchestrations: this case is a little more complex. When an error is raised, we are usually interested in all the messages that exist in the orchestration. If a port fails there is only one message; if an orchestration fails there can be many messages in the failed orchestration. It would be great if we could combine all those messages and send them as a single message to the error subscribers.
    BizTalk Server ships the ESB Toolkit with special exception-handling features. Using the ESB Toolkit API we can attach all messages to a single fault message and publish it. Processing of this message can be the same as for ports, or it can use the ESB Toolkit again. The fault message (a single fault message or a fault message with several attached messages) can be saved in the ESB Toolkit Exception database and processed in the Exception Management Portal.
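
A minimal sketch of the two approaches, written as they would appear in orchestration Expression/Message Assignment shapes (the message names and the notification helper are assumptions for illustration):

    // Failed-message subscriber: read the context properties promoted by
    // "Enable routing for Failed Messages" and hand them to a notification helper.
    failureCode = msgFailed(ErrorReport.FailureCode);
    failureText = msgFailed(ErrorReport.Description);
    portName    = msgFailed(ErrorReport.ReceivePortName);
    GLD.Samples.Shared.Classes.Notifications.Send(failureCode, failureText, portName);

    // ESB Toolkit variant, inside an orchestration exception handler:
    // build a fault message and attach the in-flight message(s) to it.
    msgFault = Microsoft.Practices.ESB.ExceptionHandling.ExceptionMgmt.CreateFaultMessage();
    Microsoft.Practices.ESB.ExceptionHandling.ExceptionMgmt.AddMessage(msgFault, msgOrder);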

We have to decide which approach is preferable: creating custom error handling code or using the ESB Toolkit.

The log4net utility is also used for logging errors and raising error notifications.

Before we start to implement error handling, we should classify our errors and decide how to process each sort of error.

Testing

BizTalk has several options for functional testing. There is good tooling for map testing, but there is nothing for application testing. Typical testing is done with additional file send test ports and file receive locations. Unfortunately, there is no two-way file port, which makes testing challenging for two-way endpoints such as most WCF and enterprise ports.

There are several options for creating tests for BizTalk applications: the Visual Studio test functionality and the older NUnit, which is conveniently supported by BTDF.

The Visual Studio test functionality is rich. On the downside, it is hard to deploy test harnesses from the Development environment to other environments: it creates too many dependencies in the deployment package, and too many pieces have to be deployed. This is a real problem for the Production environment, where Visual Studio is not installed.

NUnit requires only a couple of additional libraries, which is the key argument for using it. Moreover, NUnit is supported by the BizTalk Deployment Framework, so our decision is to use NUnit.
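
A minimal NUnit sketch of a file-drop functional test (the folder paths and file names are assumptions; in a real setup they would point at the application's file receive location and the test send port's output folder):

    using System.IO;
    using NUnit.Framework;

    [TestFixture]
    public class AtoBHappyPathTests
    {
        private const string InFolder  = @"C:\BizTalkTest\AtoB\In";
        private const string OutFolder = @"C:\BizTalkTest\AtoB\Out";

        [Test]
        public void OrderMessage_IsTransformed_AndDelivered()
        {
            // Drop a test message into the receive location folder.
            File.Copy(@"TestData\Order_001.xml", Path.Combine(InFolder, "Order_001.xml"), true);

            // Poll for the output file; BizTalk processing is asynchronous.
            string[] results = new string[0];
            for (int i = 0; i < 30 && results.Length == 0; i++)
            {
                System.Threading.Thread.Sleep(1000);
                results = Directory.GetFiles(OutFolder, "*.xml");
            }

            Assert.AreEqual(1, results.Length, "Expected exactly one output message.");
        }
    }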

There is also the BizUnit library. BizUnit is a good test library, but it is not mandatory; the newest .NET Framework libraries cover most of the BizUnit features. One drawback of BizUnit is that its documentation is scarce, so you spend a lot of time learning it.

It is hard to implement continuous integration for a BizTalk integration, but it is desirable.

Methodology

A detailed discussion of Methodology will be provided in one of the next articles, “The BizTalk Integration Development Architecture: Methodology”.

About the author

 

With 9+ years of BizTalk Server experience, Leo works as a BizTalk Developer, Architect, and System Integrator.

He has received the Microsoft Most Valuable Professional (MVP) Award in BizTalk Server for 2007, 2008, 2009, 2010, 2011, and 2012, and the Microsoft MVP Award
2013 in Microsoft Integration.

Leo is a moderator of the BizTalk Server General forum on the Microsoft MSDN site, he is a blogger [http://geekswithblogs.net/LeonidGaneline/ and http://ganeline.wordpress.com/], and he is an
author of Microsoft TechNet articles and MSDN Gallery samples.

 

About MVP Monday

 

 

The MVP Monday Series is created by Melissa Travers. In this series we work to provide readers with a guest post from an MVP every Monday. Melissa is a Community Program Manager, formerly known as MVP Lead, for Messaging and Collaboration (Exchange, Lync, Office 365 and SharePoint) and Microsoft Dynamics in the US. She began her career at Microsoft as an Exchange Support Engineer and has been working with the technical community in some capacity for almost a decade. In her spare time she enjoys going to the gym, shopping for handbags, watching period and fantasy dramas, and spending time with her children and miniature Dachshund. Melissa lives in North Carolina and works out of the Microsoft Charlotte office.

 

MVPs Present at the 2013 CloudBurst Conference


Cloud enthusiasts from around the world sat in, logged on and asked questions as MVPs presented at the 2013 CloudBurst conference.  Organized by MVPs Alan Smith and Magnus Mårtensson, the two-day conference sought to provide real-world content for Windows Azure developers and those wanting to explore the platform. 

"Good speakers, interesting topics and relevant sessions,” were among the buzz phrases during the conference.

CloudBurst demonstrates the passion and enthusiasm of MVPs and the Windows Azure development community.  One of the goals of the event was to provide the speakers the freedom to deliver great sessions and keep the event cost free for attendees. 

Videos from CloudBurst are being published as an event on Channel 9.

You can find the photos from the conference here.

 

Congratulations to the following MVPs for their contributions to the 2013 CloudBurst conference:

Maarten Balliauw

Maarten Balliauw - Windows Azure MVP

Maarten Balliauw is a Technical Evangelist at JetBrains. His interests are all web: ASP.NET MVC, PHP and Windows Azure. He's a Microsoft Most Valuable Professional (MVP) for Windows Azure and an ASPInsider. He has published many articles in both PHP and .NET literature such as MSDN magazine and PHP architect. Maarten is a frequent speaker at various national and international events such as MIX (Las Vegas), TechDays, DPC, ...

How it’s made: MyGet.org

Ever wonder how some applications are built? Ever wonder how to combine components of the Windows Azure platform? Stop wondering and learn how we’ve built MyGet.org, a multi-tenant software-as-a-service. In this session we’ll discuss architecture, commands, events, access control, multi tenancy and how to mix and match those things together. Learn about the growing pains and misconceptions we had on the Windows Azure platform. The result just may be a reliable, cost-effective solution that scales.

[ @maartenballiauw | Blog ]

 

Andy Cross

Andy Cross - Windows Azure MVP

Andy Cross is an author, software consultant, cloud architect and co-owner of Elastacloud. His passion for distributed computing, big data and high performance compute sees him building massively scalable systems utilising the limitless potential of the cloud. Andy’s specialism in the Cloud realm is big data, computation distribution, runtime diagnostics and service elasticity. He is a Windows Azure MVP, Insider, co-founder of the UK London Windows Azure User Group and a Microsoft DevPro Community Leader.

Big Data for the Win!!

Liberate yourself from the shackles of limited data processing. Embrace unstructured, semi-structured and structured data. Express your logic in best of breed technologies. Utilise thousands of cores of compute to learn how to win the Internet. Andy will explain different data structures and paradigms that can be effectively used to express them. Using Windows Azure, he will provision and interact with Hadoop, Mahout and Neo4j to mine massive data sets, learn and express meaning from them. Although embracing non-Microsoft technologies, this will be presented from a familiar place; hosted on Windows Azure and consumed with C#.

[ @andybareweb | Blog ]

 

Patriek van Dorp

Patriek van Dorp - Windows Azure MVP

Patriek van Dorp is a Technology Consultant for Microsoft technologies at Sogeti in The Netherlands. Over the last 5 years, he has specialized in Cloud Computing and Microsoft’s Cloud Computing platform, Windows Azure. Patriek is passionate about bringing different technical communities with different skill sets together and introducing them to the Windows Azure platform. He co-founded the Dutch Windows Azure User Group, which contributed to him receiving the MVP award for Windows Azure three years in a row.

Bursting to the Cloud in 1 Hour

Because applications seldom stand on their own, developers are more often facing challenges that cover skills of both Developers and IT Pros. In order to overcome these challenges, both expertises need to evolve towards each other. DevOps are born! This session will show developers how they can extend an existing on-premises infrastructure to the Cloud to facilitate their Cloud applications. The whole process will take no longer than 1 hour and can all be done from the comfort of Visual Studio.

[ @pvandorp | Blog ]

 

Magnus Mårtensson

Magnus Mårtensson - Windows Azure MVP

Magnus Martensson is the first Windows Azure MVP in Northern Europe. In his work as Technical Lead and Software Architect consultant he aids ISVs and other customers to take off and be successful in the Cloud as well as assists with project effectiveness and development process streamlining. Magnus is also a passionate international speaker and trainer and the next gig he would love to book is Yours!

Cloud is the new Black... and all that Jazz!

In this session join a true Cloud enthusiast in pondering the opportunities in the Cloud. Let’s talk about what makes the Cloud unique compared to non-cloud predecessors. We will look at the nature of Cloud Computing – service oriented with a focus on statelessness, low coupling, modularity and semantic interoperability. How does this relate to the Modern App? How do I get going if I’m not a ‘Cloud enabled Developer’ today and is this urgent? We’ll talk about how to embrace the change. The session will underline the concepts with demos of services in the Windows Azure Platform.

[ @noopman | Blog ]

 

Mike Martin

Mike Martin - Windows Azure MVP

Mike is Microsoft Lead Consultant and Architect at Crosspoint Solutions (part of the Cronos group), a company with a strong focus on BI, Data and CRM. Mike is mainly focused on the complete Microsoft product stack and therefore very flexible to work with. He’s been active in the IT industry for over 15 years and has performed almost all types of job profiles, going from coaching and leading a team to architecting and systems design and training. Today he’s primarily into the Microsoft Cloud Platform and Application Lifecycle Management. He’s not a stranger to both dev and IT Pro topics. In January 2012 he became a crew member of AZUG, the Belgian Windows Azure User Group. As an active member he’s both involved in giving presentations and organizing events. Mike is also a Windows Azure MVP and Windows Azure Insider. Mike likes helping out in the community and introducing new & young people into the world of Microsoft and technology.

Windows Azure through the eyes of an IT Pro and how to cope with Developers and Business Stakeholders

The job of an IT Pro is changing in every aspect. Windows Azure is there as an extension of your domain, and it even gives you opportunities in areas you might not have been involved in yet. In this session we hand you some practical tips for the SDLC and focus on the areas you need to master to better understand the people you work with in your everyday environment. Topics include maintenance and deployment of applications, and the setup of environments.

[ @TechMike2kX | Blog ]

 

Alan Smith

Alan Smith - Windows Azure MVP

Alan is from England and resident in Stockholm where he works as a consultant for Active Solution, a company specializing in Windows Azure and .NET development. He has been in the IT industry since 1995 and delivering training courses since 2005. Between courses he acts as a mentor and coach to other developers and teams, sharing his knowledge of Windows Azure to improve their project development. He also works closely with Microsoft, taking part in technology preview programs, attending software design reviews and providing feedback to development teams. Alan has an active role in the developer community running a local Windows Azure user group as well as playing a leading role in organizing the Global Windows Azure Bootcamp and the CloudBurst conference in Sweden. He is a frequent presenter at conferences and user groups in Sweden and around Europe and has held the Microsoft MVP award for nine years, currently as an MVP for Windows Azure.

Handling Big Data in Windows Azure Storage

Windows Azure Storage provides massively scalable storage services with local and geo-redundancy. Organizations can store terabytes of data off-site in a reliable, highly available and cost efficient manner. For those of you wishing to explore the capabilities of Windows Azure Storage there are bound to be a number of questions that will arise. Can a blob storage container really handle millions of blobs effectively? How efficient is a Table Storage query when the table contains over a billion entities? How can these data volumes be uploaded reliably and effectively? How can the usage and billing of a data intensive solution be monitored? This demo-intensive session will answer these questions and show how big data scenarios can be effectively implemented using Windows Azure Storage. The Windows Azure Storage API will be covered in depth, along with techniques to increase the speed and reduce the cost of uploading data to Windows Azure. A solution storing the 4.3 million articles that make up the English Wikipedia content and providing a simple text index search will be used as a case study.

[ @alansmith | Blog ]

 

Michael Wood

Michael Wood - Windows Azure MVP

Michael Wood is a Senior Application Developer for Resurgent Capital Services. He describes himself as a problem solving, outdoorsy, user group founding, dog-loving, blog writing, solution creating, event planning, married, technology speaking, father of one kind of guy. When he's not living up to that title he's an avid reader, (horrible) violinist and gamer. Michael is one of the Founding Directors of the Cincinnati .NET User Group in the States as well as the founder of the Cincinnati Software Architecture Special Interest Group. He is also a Microsoft MVP in Windows Azure.

Dependable Cloud Architecture

Felix Baumgartner jumped from over 128,000 feet to break a world record for sky diving. An image was posted to a Facebook account after his landing, and in 40 minutes it had logged over 216,000 likes, 10,000 comments, and 29,000 shares. The Pottermore website, digital home to one of the world’s best known wizards, saw a billion (with a “B”) page views just two weeks after it launched and was signing up new users at a rate of 25,000 per day. Whether you need this level of instant scalability, or you simply want to be ready for it when your own idea takes off, the cloud allows you to create solutions that scale easily and have high availability; however, these capabilities do not come automatically. You need to know how best to leverage a cloud platform to achieve them successfully. This presentation focuses on architecture patterns and coding techniques that help provide reliable cloud solutions. While the content of this presentation can be applied to many cloud platforms, the examples will be given using Windows Azure.

[ @mikewo | Blog ]

 

 


Getting started with Coded UI Testing for your Windows 8.1 Store Apps


 

Editor’s Note: In partnership with Microsoft Press, MVPs have been contributing to an ongoing guest series on their official team blog. Today’s article is from Visual Studio ALM MVP Mike Douglas and is the 30th in the series.

I recently created and published my first Windows 8 Store application to the store.  I created it using XAML/C# and followed the MVVM pattern.  As you can imagine, I learned a number of things throughout the process.  Now that it is done, I want to rewrite the underlying codebase using those lessons learned.  So how can you rewrite the code but ensure that none of the functionality breaks?  Unit tests provide a great way to regression test your application.  However, adding dependency injection and better unit tests is one of the things I need to do as part of the rewrite, so this won’t provide me the test coverage to ensure that my application is functioning properly.  In addition, the unit tests only test the individual components; they cannot test all of the layers and ensure the UI is functioning properly.  Coded UI Testing has traditionally provided the mechanism for creating functional tests that exercise all of the layers and can be used for regression testing.  With Windows 8 Store apps, however, Coded UI Tests were not supported, and I wouldn’t be able to verify the gestures in my application.

Now with the release of Windows 8.1 and Visual Studio 2013, Coded UI Tests can provide automated functional testing against the UI including not only the mouse and keyboard inputs, but also to verify that the gestures react properly. 

How do I get started?

The first two things you need are Windows 8.1 and Visual Studio 2013.  Using only one of these will not work.  Also, Coded UI Tests for Windows 8.1 only supports XAML based applications.  If your application was written for Windows 8.0, you need to retarget it to Windows 8.1.  This is done very easily by opening the project in Visual Studio 2013, right clicking on the project, and selecting Retarget to Windows 8.1.  For more information on retargeting your application, see the MSDN article Retarget your Windows Store app to Windows 8.1.
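
Once the app targets Windows 8.1, a gesture-driven Coded UI test can be written against it. The sketch below only illustrates the general shape of such a test; the package name and the control’s automation id are placeholders, and the types shown (XamlWindow, XamlButton, Gesture) come from the Microsoft.VisualStudio.TestTools.UITesting namespaces:

    using Microsoft.VisualStudio.TestTools.UITesting;
    using Microsoft.VisualStudio.TestTools.UITesting.XamlControls;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [CodedUITest]
    public class StoreAppGestureTests
    {
        [TestMethod]
        public void TapStartButton_ShowsGameScreen()
        {
            // "MyPublisher.MyApp_abc123!App" is a placeholder for the app's tile id.
            XamlWindow appWindow = XamlWindow.Launch("MyPublisher.MyApp_abc123!App");

            // Find a button by its automation id (placeholder) and tap it.
            XamlButton startButton = new XamlButton(appWindow);
            startButton.SearchProperties.Add("AutomationId", "StartButton");
            Gesture.Tap(startButton);

            // Assertions against the resulting UI would follow here.
        }
    }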

 

 

 To read the full article, click here

 

 

Friday Five - October 25, 2013


 

1. TFS 2013 - What's New in Web Test Management

     By Visual Studio ALM MVP Anna Russo

2. Updating the Microsoft TFS 2012 and Project Server 2013 VM to TFS 2013

     By Visual Studio ALM MVP Steve St. Jean

3. Managing DAG in Exchange Server 2013 – The Series

     By Exchange Server MVP Anderson Patricio

4. SQL Server Compact 4 desktop app with simple Private Deployment and LINQ to SQL

     By SQL Server MVP Erik Ejlskov

5. Display the Full Touch Keyboard on Your Surface

     By Windows Entertainment and Connected Home MVP Barb Bowman

 


BizTalk Integration Development Architecture: Artifact Composition


Editor’s note: The following post was written by Microsoft Integration MVP Leonid Ganeline

BizTalk Integration Development Architecture: Artifact Composition

This is the second in a series of articles.

The first article is “BizTalk Integration Development Architecture”.  

Artifact composition describes several aspects of the BizTalk Integration architecture:

  • Naming conventions
  • Folder structure
  • Shared Artifacts

Change Management, Update Rule

Integration is a never-ending endeavor. A BizTalk integration embraces many unrelated projects, created by different developer teams with different skill sets and different requirements. It is almost impossible to create architecture rules that survive all the way. One smart approach is to hire the most experienced developer/architect to create a good starting rule set and architecture patterns, so that the best existing practices are embraced as early as possible. But changes and modifications are inevitable.

One way to fight this problem is a rule about updates.

Is it a problem or not? Let’s see an example:

The BizTalk integration includes 50 BizTalk applications which were created over a 10-year development effort. The 25th application was the Notification application, which is used as a notification hub for all other applications. All applications created after it use this Notification application, but the older applications do not; they use several “unconventional” notification methods.

Now one of these old applications has to be modified for some unrelated reason, and resources were found for the modification. What about the Notification hub? Should it be part of this modification or not?

The Update rule could be:
Each application update should include all architecture updates that happened between the last application update and now.

It sounds familiar: this rule mimics the definition of a service pack, which usually includes all up-to-date fixes. With this rule we keep the applications in good shape, we easily track the legacy applications, and we track the up-to-date fixes.

Tip: Keep a table with the applications as rows and the architecture updates as columns, and you always know the current state of updates; see the example below.
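
For illustration only (the application and update names are invented), such a matrix could look like this:

    Application        Notification hub   Shared.Schemas split   log4net logging
    AtoB (2005)        no                 yes                    yes
    AtoC (2009)        yes                no                     yes
    XtoCanonical       yes                yes                    yes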

The rule could also be different. For example: an architecture update must be implemented immediately on all applications that satisfy some criteria.

Any Update rule is much better than no rule at all. When we develop a standalone application, we don’t know whether this version is the last one or the application is going to live for years with many updates along the way. But a BizTalk integration is always a never-ending process, so the Update rule is a good candidate for a good architecture.

Naming conventions

Naming conventions are placed in a separate document. Please use those naming conventions as the source information.

Folder Structure

The folder structure mirrors the artifact hierarchy. Because the namespaces also mirror the artifact hierarchy, the folder structure looks like the namespace structure.

The namespace is the better formal representation of the hierarchy, so I derive the folder structure from the namespaces, not the other way around.

The main rule is:
Folder structure and folder names should be a copy of the namespace, or of the namespace pieces, of the artifacts saved in those folders.

It seems complex, but it is simpler if we use pictures to demonstrate the rule.

 

The name of the project could be just Schemas, but the full project namespace as the name is better. Here two rules are “fighting” for dominance.

One rule is: the names of a project, its assembly, its namespace, and its folder should be the same.

The second rule is: a folder name should be equal to the corresponding part of the namespace.

The first rule is the winner.

So we use the folder names GLD, Samples, and Template for all projects whose namespace starts with “GLD.Samples.Template”. But for the Schemas project folder we use “GLD.Samples.Template.Schemas”, not “Schemas”.
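
A minimal sketch of the resulting folder layout (the sibling project names are invented for illustration):

    GLD
      Samples
        Template
          GLD.Samples.Template.Schemas          <- project folder = full project namespace
          GLD.Samples.Template.Orchestrations
          GLD.Samples.Template.Pipelines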

If we group several projects inside a single solution, we follow the same rule for the additional group folder names.

For example, we group the projects inside the Shared.Schemas solution by system name, which is CRM, Internal, SAP, or Shipping. So we create subfolders for each group:

 

Do not forget to use the same rule for the solution folders inside Visual Studio:

 

This rule is very important for a big BizTalk integration. Imagine a hundred applications and a hundred Visual Studio solutions. You are going to investigate and fix an error, and all you know is the assembly name, GLD.Samples.Shared.Classes.Shipping. If you are a developer, you instantly know that you have to open the GLD.Samples.Shared.Classes Visual Studio solution to look at the code. If you are an administrator, you instantly know that you have to check the GLD.Samples.Shared.Classes BizTalk application. If you are the source code manager, you instantly know where this code is stored in TFS or Git.

If you have ever searched for an assembly named GLD.Samples.Shared.Classes.Shipping and finally found the code in the GLD.Samples.Sap project, which was placed in the GLD.Shipping solution, you know how completely it wastes your time. If you have to investigate assembly relations in such a mess, you are doomed.

Shared Artifacts

To Share or Not To Share?

There are several aspects; we consider this question from different points of view: development, deployment, and operations.

Too many shared artifacts result in overly complex relations between assemblies, projects, and applications. Artifacts should be shared only for a good reason.

BizTalk Server controls the relations between assemblies with very strict rules. One of these rules is: if we want to redeploy a shared assembly, we first have to undeploy all assemblies that reference it. That rule is extremely important in a big BizTalk integration.

Development usually pushes us to share artifacts, to get more compact and organized code.

Deployment, on the other hand, pushes us to denormalize code, because shared code complicates deployment and redeployment.

Please consider both sides before starting to implement a shared component.

If we share, we reuse, which couples code. Changes in shared artifacts can trigger changes in the artifacts that share them.

If we don’t share, we break the relation between artifacts. Changes in isolated artifacts do not affect anything outside those artifacts.

It is possible to share without creating a relation between artifacts if we share a template, a pattern, or an idea, not the code or the artifact itself.

Schemas

As discussed, a schema is a very special artifact: it plays the contract role in BizTalk applications. Services in an SOA architecture do not share code, they share contracts. For BizTalk development this means we share schemas between systems and between applications.

Schemas are good candidates for sharing. Only a few schemas are designed not to be shared and are used purely for internal purposes.

So, if schemas belong to an external system and are generated by an [adapter] wizard, they will be shared in most cases. Place them in a Shared.Schemas application under a separate project.

If schemas are canonical schemas, place them in a Shared.Schemas application, in the Canonical project group, under a separate project.

If schemas are shared only inside a single application, they can be placed in a separate project of the current solution. If all application artifacts are placed in one project, the schemas can also be placed in that project.

Let’s see a real-life example.

Shared Schemas Architecture Example

 

The AtoB application was created first; it transfers data between systems A and B. Next came the AtoC application, which exchanges data between systems A and C. Both applications use the Sch_A schema. BizTalk uses a schema not in the application namespace but in the global namespace, which means BizTalk cannot recognize a schema if it is deployed in several applications. A schema should be deployed only once if we want to use it in ports. This rule protects us from severe design errors, but it also forces us to share schemas.

For our example that means we cannot deploy Sch_A into both applications. The naïve approach is to reference the AtoB.Process assembly from AtoC.Process; I indicated this by placing Sch_B in brackets in the picture.

So far so good.

Then we find a bug in the AtoB application, not in Sch_A but in Orch_1. We fix it and want to deploy a new AtoB.Process assembly. Not so good: first we have to undeploy the AtoC application, because it references AtoB. Now AtoC is tied to any modification in AtoB, not only to Sch_A.

 

 

Let’s fix it. We extract the Sch_A schema into a separate project/assembly. Now we don’t have to redeploy AtoC.Process when we change something in Sch_B or Orch_1, only when we change Sch_A.

But it still doesn’t look right. From the design standpoint, Sch_A does not belong to the AtoB application. It does not belong to the AtoC application either. Both applications use it, but it is independent of them. Only system A dictates what the Sch_A schema looks like. The owners of that system can change it, not the owners/developers of the AtoB and AtoC applications.

 

Let’s change our design to show this ownership. A new Shared.Schemas application was created, which holds all schemas that belong to the integrated systems. Our Sch_A was placed into a separate, independent project. Now AtoB and AtoC both reference this Shared.Schemas.A assembly.

Isn’t something wrong here? How do these changes simplify our development, deployment, and operations? It looks like we now have one more application and one more reference (from AtoB to Shared.Schemas.A). How can that be simpler?

The key word here is “changes”. If our applications will never change, we don’t need these “improvements”; actually, we don’t even need AtoB and AtoC, just one application holding all the artifacts.

But when we start to modify our applications, we immediately start to understand that “loose coupling” is not just a buzzword.

The next step is to ask: why is the Sch_A schema so special, and not the Sch_C and Sch_B schemas? All those schemas belong to the integrated systems, not to the integrating applications.

 

Let’s change our design to fix this. Now the schemas of all systems are placed into separate assemblies. Moreover, we found out these systems have (or could have) several interfaces, not just one, hence we get a separate assembly for each interface. I use the term interface here to mean a separate data contract/schema. Of course, if it is a two-way interface (such as request-response), both the request and response schemas belong to the same assembly.

Again, the resulting design looks more complex, but it is more appropriate for real life and it models the real relations.

The next step in our design is to notice that we integrated two systems, B and C, with one system, A. What happens if we add one more system or replace one of the systems? It seems the canonical data model fits here perfectly.

 

Let’s create a canonical schema for this interface (the I_1 interface) and link the systems through this canonical schema. Now each application deals with exactly one system and with the implementation details of that single system. Changes in this system will not force us to modify other applications. Before, if system A changed its interface, we had to modify both the AtoB and AtoC applications; now we change only X_to_Canonical1. I intentionally changed the A application name to X to show that we can easily add Y, Z, etc. systems to the picture without changing any other application.

The canonical data model is not a universal, required design pattern. It works well if we have one-to-many or many-to-many integration interfaces between systems. It doesn’t make sense to use it for a one-to-one interface.

Here is a template Shared.Schemas application code which can save us some precious development time.

.NET classes

.NET classes can be shared between BizTalk applications. Sharing them does not create the “redeployment hell” that can happen when sharing other BizTalk artifacts: schemas, maps, orchestrations, pipelines, etc.

If we need to modify a .NET assembly, we just re-GAC it. This does not require undeploying all related assemblies.

One recommendation: do not share a .NET class right away. Try it in one application as a local class for that application. Try it in the next application. When the class has stabilized, when you feel it will not change much in the future, extract it into a shared project. Usually those shared assemblies are placed inside one Shared.Classes application.

Here is a template Shared.Classes application code which can save us some precious development time.

Maps, Orchestrations, Pipelines, etc.

Other BizTalk artifacts, not the schemas, are not good for sharing.

We share pipelines, maps, orchestrations, and rules only on very special occasions, usually as components of the shared BizTalk infrastructure. One example is a notification service that manages notifications: it formats them, filters them, and sends them as emails, SMS messages, tweets, etc. Always consider sharing artifacts as a service.

Maps are usually entirely local to an application. From the design point of view, sharing maps between applications is a really bad idea.

If you want to share an orchestration, do not share the orchestration assembly between applications. Share it as a service.

If you call or start an orchestration from another application, you have to reference the orchestration assembly, so consider using another architecture pattern, such as direct binding. That means the calling orchestration just publishes (sends) messages and the called orchestrations subscribe to these messages. If you want to pass additional parameters with the message, use the message context properties. The message context can also be used to create a “custom binding”, where the subscriptions have additional predicates which match, for example, the originator orchestration name; a minimal sketch follows below.
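
A sketch of this idea, written as it would appear in a Message Assignment shape (the property schema name GLD.Samples.Shared.Properties.OriginatorOrchestration is an assumption; any promoted property would do):

    // Publishing orchestration: stamp the outgoing message with the originator name.
    msgOut = msgIn;
    msgOut(GLD.Samples.Shared.Properties.OriginatorOrchestration) = "AtoB.Process.Orch_1";

    // The subscribing orchestration's activating receive then uses a filter such as:
    //   GLD.Samples.Shared.Properties.OriginatorOrchestration == "AtoB.Process.Orch_1"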

Pipelines also should not be shared between applications, but pipeline components can be.

 

 

About the author

With 9+ years of BizTalk Server experience, Leo works as a BizTalk Developer, Architect, and System Integrator.

He has received the Microsoft Most Valuable Professional (MVP) Award in BizTalk Server for 2007, 2008, 2009, 2010, 2011, and 2012, and the Microsoft MVP Award
2013 in Microsoft Integration.

Leo is a moderator of the BizTalk Server General forum on the Microsoft MSDN site, he is a blogger [http://geekswithblogs.net/LeonidGaneline/ and http://ganeline.wordpress.com/], and he is an
author of Microsoft TechNet articles and MSDN Gallery samples.

 

About MVP Monday

The MVP Monday Series is created by Melissa Travers. In this series we work to provide readers with a guest post from an MVP every Monday. Melissa is a Community Program Manager, formerly known as MVP Lead, for Messaging and Collaboration (Exchange, Lync, Office 365 and SharePoint) and Microsoft Dynamics in the US. She began her career at Microsoft as an Exchange Support Engineer and has been working with the technical community in some capacity for almost a decade. In her spare time she enjoys going to the gym, shopping for handbags, watching period and fantasy dramas, and spending time with her children and miniature Dachshund. Melissa lives in North Carolina and works out of the Microsoft Charlotte office.

 

 

Viewing Your Code Through Visual Studio’s CodeLens


Editor's note: The following post was written by Visual Studio ALM MVP Richard Banks

Viewing your code through Visual Studio’s CodeLens

We’ve seen it all when it comes to code, right? Good code, bad code, code with tests, code without tests, clean code, spaghetti code and code that makes you scratch your head and wonder just what someone was thinking when they wrote it.

Wouldn’t it be nice if Visual Studio let you know if code was unit tested or not? What if it could let you know a change was going to break large amounts of other code? What if it could give you the back story on why a particularly nasty piece of code turned out the way it has?

Enter Visual Studio 2013’s new CodeLens; a tool to help you answer all those questions.

CodeLens is like a window into your code, a lens if you will, showing you what you cannot see when just looking at the code on your screen. Think of it like your car’s dashboard, giving you unobtrusive, at-your-fingertips information as you drive your IDE. Never more than a glance away.

To show you why this is such a brilliant feature let’s ask some typical developer questions and see how CodeLens helps us answer them. By the end you should have a good understanding as to why CodeLens is such an invaluable addition to Visual Studio 2013 and a must-have feature.

 

“If I change this code, what might I break?”

You want to make a change to some code, however you’re uncertain what else will be affected if you do or if you’ll end up chasing the ripples of your change throughout your entire code base. It’s important to know the answer as it affects the size of the work in front of you.

With CodeLens you can quickly see an incoming reference count for the code you’re looking to alter and can quickly assess the first stage impact of any change.

To give you an example of CodeLens in action we’re going to use the excellent RestSharp open source project. Here we see the CodeLens dashboard showing us that the IRestClient interface has 54 references to it:

 

 

 

Clicking the reference count opens the CodeLens window showing all the places where references exist in the solution. Hovering over a reference will show a small code snippet of the calling code giving you more context when needed.

 

It’s not just types that this applies to, either. The same behavior applies to methods and properties as well.

As you might expect, double clicking a reference will navigate you to the referring code. From a usability perspective, when you double click a reference in the popup to navigate, the popup will close. What if you wanted to keep those references handy so you could look at a number of them individually? Clicking the dock icon in the top right of the popup will close the popup and add it as a docked window so that the information remains available as you navigate through your code.

 

What if you want to get a feel for how far a change might spread? Looking at just the first level of references is OK, but you can use the Show on Code Map link to visualize all the incoming code connections at once.

From an understanding perspective, if you double click a node in the code map you will navigate to the calling method. If you then open the references indicator for the caller and click the Show on Code Map link, its own references will be added to the code map allowing you to rapidly build up a great visualization of how far a change might spread. How good is that!

 

 

“If I change this, will it still work? How can I be sure?”

Those of you following a test driven approach to development may have already answered this question with “Because the tests will break!”

While that might be true for some of us, many of us live with test suites that are, well, “limited” in terms of what they cover. Making the question harder to answer is that, without easily seeing what the tests we have actually call, it’s hard to know whether the code you want to change is executed by a unit test or not.

CodeLens helps answer that question by showing a count of tests (as discovered by the Test Explorer) that reference your methods along with their current success status.

 

In this example there are two tests calling the ExecuteAsync method, with the tests yet to be run. Clicking Run All will run the displayed tests, not all the tests in the solution, allowing you to just run the tests that are impacted by your change. Of course, once those tests are passing you should then do a full test run prior to checking in your code.

When a test run finishes, CodeLens will show the updated results. Here I’ve deliberately broken a test so you can see the different success statuses:

 

Wonderful! No more excuses, people. Write those tests!

 

“Am I looking at the latest code?”

Ever looked at a method, made some changes to it and then realised it wasn’t the most recent code you changed? Yeah, me too.

CodeLens helps you answer the latest code question by asynchronously querying Team Foundation Server when it loads its indicators. It checks if there are more recent changesets on the server than what you’re working on and will show you when this is the case.

To activate this feature your code will need to be in a TFS 2013 team project using Team Foundation Version Control. The CodeLens indicator will look as follows:

 

The “6+1 changes” indicator shows there is a changeset on the server more recent than my local workspace. Clicking the indicator will show the list of changes, including the author and when they were made.

In this case we also see the IronProgrammer has linked their changeset to a work item, providing even more context about their change and further aiding understanding.

 

“Why is this code like it is?”

Being able to see the changes for a piece of code along with related work items and code reviews provides fantastic context in determining how code came to be as it currently is. If that information isn’t enough for you, you can double click any of the items to be taken to the relevant TFS entities for even more detail.

If your organization uses Lync you also have the ability to open a Lync conversation with the changeset author, directly from the CodeLens window. It takes the guess work out of why a change was made and fosters improved communications in the development team.

 

“Is this code healthy?”

Finally, while it’s not part of the default CodeLens experience, the CodeLens team have released an experimental extension you might want to try out.

It shows a maintainability index for your code; a rating of the health of what you’re looking at, and how that maintainability has changed over time. Perfect for staying on top of your technical debt!

 

You can find it on the Visual Studio Gallery by searching for ‘Code Health Indicator’.

Being an experimental indicator means the team is trying out some ideas. It’s not a supported extension and it may well change in functionality over time, but don’t let that put you off. Try it out and provide some feedback and suggestions. I’m sure they’d welcome it.

 

Wrapping Up

CodeLens loads its indicators asynchronously, so you shouldn’t feel any performance impact in daily use. Of course, Visual Studio gives you great control over your development experience, so you can disable CodeLens if you want to. You can also restrict it to display a subset of indicators, and you can change the font and color of the dashboard text. It’s completely up to you.

So there you have it, a quick look at Visual Studio 2013’s new CodeLens feature. I think it’s a fantastic addition and should be part of every developer’s .NET toolbox! I hope you enjoy it.

 

About the author

Richard Banks is a Microsoft Visual Studio ALM MVP, Professional Scrum Trainer for Scrum.org, author of the Visual Studio 2012 Cookbook and runs the Sydney Alt.Net user group. For some strange reason he gets a real kick out of helping development teams improve their skills and produce great software. You can get in touch with him on twitter at @rbanks54 and follow his blog at http://www.richard-banks.org/.  He currently works as a Principal Consultant for Readify.

 

 

EventSource Improvements Include Better Diagnostics for Diagnostics


Editor's note: In partnership with Microsoft Press, MVPs have been contributing to an ongoing guest series on their official team blog. The following post was written by Visual Basic MVP Kathleen Dollard

Many of the improvements in .NET 4.5.1 are tactical – they improve how you do what you’re already doing now. But a seemingly small change to the EventSource class is strategic – it’s part of changing what you do. In the same way process has been a conversation over the last ten years, diagnostics will be a key strategic conversation for the next ten years. I’ll define diagnostics simply – understanding what’s happening in a badly behaved application, especially in production. Although there’s a lot of room to grow in the surrounding tooling, EventSource tracing will help you today and increase its impact as the tooling matures. The .NET 4.5.1 changes to EventSource make it truly ready to play the central role in your diagnostic game plan.

The .NET Framework has had tracing since its very first release, and there are several good third-party libraries. But in this old style of tracing, your code and the libraries took responsibility for all of the work, tracing was comparatively slow, output was isolated to your application, tracing data was not strongly typed, and the result tended toward printf-style debugging like Trace.WriteLine(…) or logger.Warning(…).

EventSource is a lightweight tracing tool that can output to many targets. The most important of these targets is the built-in support for Event Tracing for Windows (ETW). ETW is part of the Windows operating systems and is already gathering diagnostics, even while you’re reading this blog post.
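
A minimal EventSource sketch (the class and event names below are invented for illustration; the EventSource base class, the Event attribute, and WriteEvent come from System.Diagnostics.Tracing):

    using System.Diagnostics.Tracing;

    public sealed class OrderProcessingEventSource : EventSource
    {
        public static readonly OrderProcessingEventSource Log = new OrderProcessingEventSource();

        [Event(1, Level = EventLevel.Informational)]
        public void OrderReceived(string orderId)
        {
            WriteEvent(1, orderId);
        }

        [Event(2, Level = EventLevel.Error)]
        public void OrderFailed(string orderId, string reason)
        {
            WriteEvent(2, orderId, reason);
        }
    }

    // Usage: OrderProcessingEventSource.Log.OrderReceived("12345");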

To continue reading, click here

About the author

Kathleen Dollard is passionate about community and has been a Microsoft MVP since 1998. She works to turn good programmers great programmers by training and mentoring individuals and teams. You can follow @KathleenDollard on Twitter and watch her blog at http://msmvps.com/blogs/kathleen.

MVP Showcase Goes Bigger


Over 40 MVPs will congregate in one room during the 2013 MVP Summit to share their passion, expertise and projects.  The MVP Showcase started last year and quickly became a favorite event among attendees.  This year, the showcase is bigger!  In addition to MVP participants, product teams and organizations like Visual Studio, Surface and Imagine Cup will stand along side MVPs at the Showcase to highlight the newest technologies, collaborations and global community projects.

"The MVP Showcase adds an additional layer of networking, collaboration and fun to the MVP Summit," says Community Program Manager and MVP Showcase organizer, Kari Finn.  "The response from last year was so overwhelming we knew it would be even bigger this year."

In this expo-type setting, each MVP, MVP team, product team or Microsoft organization will host a booth to demonstrate their work and speak with their peers.  Below is a complete list of MVP presenters, we hope you can join us!

MVP Showcase

Sunday, November 17

4:00pm-7:00pm

Grand Ballroom -

Bellevue Hyatt Regency Hotel

 

MVP Presenters (in alphabetical order):

Akira Hatsune, Developer Tools for Visual Basic MVP
Topic: coloris, a Windows Store App
Description:Learn how everyone can create good sense apps by setting beautiful color balance with a simple step. This application is especially useful for developers and designers who want to make their apps look better.

Dr. Alex Danilychev, Alex Juschin, Virtualization MVP and Remote Desktop Services MVP
Topic: Virtual Display Manager
Description:When physical monitors are just not enough, consider creating Virtual Displays with Virtual Display Manager. Perfect solution for a busy IT PRO or developer. Supports local and remote desktops.

Alon Fleiss and Pavel Yosifovitch, C++ MVP and C# MVP
Topic: OzCode for Visual Studio
Description:OzCode is a productivity tool that helps identify and fix bugs during C# debugging and saves time and effort and ensures that your debugging experience is effective.

Anthony van der Hoorn and Nik Molnar, ASP.NET MVPs
Topic: Glimpse for ASP.NET
Description: Check out the latest happenings with Glimpse for ASP.NET and what the community is up to, with OWIN, WebForms and MVC5 just scratching the surface.

Bill Wagner, Visual C# MVP
Topic: Humanitarian Toolbox
Description:Bring the people, build the tools & help the world! Learn how you can be a part of a team of community thought leaders that helps to build disaster response apps.

Christian Nagel, Visual C# MVP
Topic: Windows Store Apps: Menu Card & Book App
Description:Experience Windows Store business Apps, their usability and architecture using Windows Azure Mobile Services.

Christian Strevel, Client App Development MVP
Topic: AppFactory
Description:Create awesome apps that incorporate RSS, News, Images and Video without writing a single line of code for Windows 8.1 and Windows Phone 8 with an intuitive web platform! More than 1,500 apps created today!

Gianni Rosa Gallina and Beppe Platania, Windows Embedded MVPs
Topic: Sense3 : Innovative Digital Signage for Conferences
Description: Check out this next-gen Intelligent Digital Showroom innovation that employs a Kinect+Touch UI, is remote controllable from the cloud and is designed for conference events, letting people watch videos, browse catalogues and play interactive mini-games.

Haifeng Liu, Guanjun.Hao and LiChen, ASP.NET and Windows Azure MVPs
Topic: ASP.NET Open-Source Website
Description:Learn more about how China’s largest .Net source code distribution site helps developers use and learn ASP.NET.

Ivan Toledo Ivanovic, Windows Phone Development MVP
Topic: VideoShare
Description:Check out this cool app that uses Proximity API to stream video between Windows Phone and Windows 8.

James Rowland Jones, Richard Fennell, Steve Spencer, Jen Stirrup, Simon Skinner, Steve Beaumont, Steve Spencer, Tony Rogerson, Chris Testa O'Neill, Richard Conway, Andy Cross, Andrew Westgarth, Aidan Finn, Damian Flynn, Windows Azure, ALM, Windows Server, System Center and SQL Server MVPs
Topic: MVP Cloud OS Week and Relay
Description:During this first of a kind event in the UK and across the world, more than 18 MVPs from the UK and Ireland delivered sessions that explain Microsoft’s Cloud OS story to IT Professionals and Developers. During the course of the week, MVPs talked about data insights, transforming the datacentre, the hybrid organisation, enabling modern apps and people centric IT to over 400 attendees.

Jason Lee and Walter Wong, Visual Basic and Developer Security MVPs
Topic: Taiwan Community Rising Stars and MVP APPs Development
Description:Learn more about the events and activities where Windows Phone and Windows 8 apps are developed by MVPs and Community Rising Stars.

Jeff Arnett, Windows Embedded MVP
Topic: High volume digital oscilloscopes running Windows CE
Description:Agilent Technologies’ next generation high volume portable oscilloscopes are meeting industry needs for performance, reliability, and flexibility by leveraging Windows CE, CF.NET & MFC. See an automated demo of over a million trace points per second on this oscilloscope while also exhibiting a modern look and feel and intuitive user interface all based on Windows CE.

Jeff Guillet, Exchange Server MVP
Topic: Build Your Own Private Cloud
Description:Nothing advances your technical understanding like hands-on experience. Build your own super-fast Windows Server 2012 R2 Hyper-V server for about $1,000 USD!

Jeff Paries, Client Development MVP
Topic: DADO
Description:See how Microsoft technologies are working with the Internet of Things.

Joost van Schaik, Tom Verhoeff, Rodolpho Marques Do Carmo and Matteo Pagani, Windows Phone Development MVPs
Topic: Windows Phone Week
Description:Some guys and a crazy idea: an accidental global event about Windows Phone!

Joshua Blake, Kinect for Windows MVP
Topic: Real World Kinect apps
Description:Check out real Kinect for Windows applications and learn how Kinect is being used to solve real-world problems today.

Ken “The Hoff” Lasko, Lync MVP
Topic: Lync Dialing Rule Optimizer
Description:Learn how this free web-based tool can drastically simplify the task of implementing Lync Enterprise Voice dialplans, voice policies, routes and usages.

Kendra Little, Jeremiah Peschka, and Jes Schultz Borland, SQL Server MVPs
Topic: sp_Blitz™ Windows App
Description:Learn how we help SQL Server users worldwide quickly diagnose and fix SQL Server problems with a free ClickOnce Windows desktop app!

Kinfey Lo, Client App Development MVP
Topic: Ebooks in Education Windows 8 App
Description:Learn more about the app that deploys educational content across platforms and devices and is now considered China’s best mobile learning solution.

Lars Keller (@larskeller), VSTO MVP
Topic: Mobile Incident Manager: a SCSM Service Excellence with Windows 8 App
Description:Learn how this Windows 8 App for the Microsoft System Center Service Manager reduces traveling and logistical inefficiencies of the field service support employee, allowing them to spend more time on resolving incidents.

Lorenzo Maiorfi and Mirco Vanini, Windows Embedded MVPs
Topic: Build a Smart Home System (and Smart Factory, too) with Microsoft Embedded Technologies
Description: Let’s take a look at a reference IoT/M2M application driving both smart home devices (through Z-Wave) and industrial appliances (through ZigBee and ISM).

Mathias Brandewinder, Tomas Petricek, and Ryan Riley, Visual F# MVPs
Topic: VegaHub, Needle and Charon
Description:Building an open source, integrated tool chain for interactive Data Science and Machine Learning in F#.

Mayur Tendulkar, Windows Phone Development MVP
Topic: Experts Connection: A Windows Phone App to locate MVPs nearby
Description: Traveling to a new city? Share a passion for technology? Want to discuss technology over coffee/beer, or need help with Windows? There is a chance that you’re near a Microsoft MVP. Find your MVP friends right on your device.

Michael L Perry, Client App Development MVP
Topic: Correspondence Occasionally-Connected Client Framework
Description:Sync in real-time while online. Queue changes locally while offline. This open-source framework makes it easy to collaborate and share across all .NET mobile device platforms.

Michael Samarin, Client App Development MVP
Topic: Windows Phone Imaging and Camera capabilities
Description:Learn about Nokia Imaging SDK, working with image processing, custom filters, camera, hi-res images and optimizing apps for 1080p devices.

Mikael Nystrom and Johan Arwidmark, SharePoint Deployment Planning Services (SDPS) and Enterprise Client MVPs
Topic: Build an entire WS/SC infrastructure faster than ever before
Description: Learn how to build an entire lab, test, PoC and production environment using hydration, a form of highly automated setup driven by simple tools like PowerShell and the Microsoft Deployment Toolkit.

Mike Halsey, Windows Expert Consumer MVP
Topic: Consumer Video Production
Description:Reaching out to wider communities. Learn how to make professional quality tutorial videos and presentations with an introduction to the tools of the trade.

Morten Nielsen, Silverlight MVP
Topic: ArcGIS Runtime SDK for .NET
Description:See how you can build advanced 2D and 3D map apps and do GIS analytics in Windows Phone, Windows Store and WPF using XAML and .NET.

Peggy Reuter-Heinrich, Client App Development MVP
Topic: MVP Central
Description: Check out the Windows 8.1 app for German-speaking MVPs that displays MVP skills, an MVP wall of fame, a presentation of each MVP with full information, a description of the program and how to become an MVP, the MVP Twitter channel and a picture of our MVP locator.

Peisheng Chen, Visual C++ MVP
Topic: Colored Fish Tank, a Chinese Windows Phone 8 Game
Description: Check out this classic game of elimination that’s easy to play, fun to look at and perfect to pass the time.

Rodrigo Díaz Concha, Silverlight MVP
Topic: Windows Phone 8 Online Diploma Training and MVA Content
Description: Check out the training material for Windows Phone 8 development for the Latin American Windows Phone 8 Online Diploma Course, with more than 10,000 people registered.

Rudy Huyn, Windows Phone Development MVP
Topic: Giant wall of Windows Phones
Description: Learn how we set the world record for the largest number of connected and synchronized phones. See 200 Windows Phones connected and synchronized, driven by another Windows Phone displaying a giant map!

Samuel Blanchard, Windows Phone Development MVP
Topic: The World of Emulation on Windows Phone
Description: If you are a fan of retro-gaming, come see Samuel talk about emulation and 8-bit games! He is the author of PurpleCherry, a Game Boy emulator for Windows Phone, and is responsible for the ports of many others. 8-bit power!

Shaun Walker, ASP.NET MVP
Topic: DNN
Description: Learn about the largest open source project native to the Microsoft platform that makes designing, building and managing feature-rich sites and communities fast, easy and cost-effective.

Shehap El-Nagar, SQL Server MVP
Topic: SQL Server Performance Tuning, Community and its SQL Server 2014 blogs
Description: Learn more about the biggest SQL Server community in the Middle East, with 4,500 members, 1,000 Facebook followers, 150 blogs and a YouTube channel.

SungHo You, Visual C# MVP
Topic: WatchDoing: a Remote Screen Monitoring System
Description: Learn more about this new conceptual remote screen monitoring service that allows you to view real-time activity anytime and anywhere using a PC webcam, a smartphone camera, or CCTV.

Zoiner Tejada and Michele Leroux Bustamante, Windows Azure MVPs
Topic: n2y: Apps for education
Description: Learn about a new cloud-based platform for the creation and publishing of interactive, symbolated, speaking documents for education.

Xiaoming Jia, ASP.NET/IIS MVP
Topic: LADNS - A Lightweight Location Aware DNS Server
Description: Learn more about this lightweight DNS server that can accelerate your website/server based on your users' location. If you have multiple network links served by different ISPs and you want to resolve your website with different IP addresses based on your users' location, you can take advantage of this lightweight but powerful DNS (LADNS) server.

 

 

 

Three Top Features That Will Help Improve Your Productivity with Visual Studio 2013


Editor's note:  The following post was written by ASP.NET MVP Patrick Desjardins

Three Top Features That Will Help Improve Your Productivity with Visual Studio 2013

Visual Studio 2013 is packaged with a lot of new features, and three of them will become your next best friends. Not only are they easy to use, they don’t require a lot of deep configuration to activate. We will discuss enhanced debugging that shows a method’s return value, quick access from the code editor window to code used anywhere in your solution, and improved navigation inside your files.

The first feature is an enhancement when you are debugging. It’s now possible to see the return value of a method. It’s available directly from the Autos Window, inside the Watch Window, or it can be displayed with a command inside the Immediate Window. To see the return value, you need to be stopped at the method’s ending curly bracket (for example, with a breakpoint there). This is a logical place to be because the feature shows you the value returned by the method you are in. It is only available while you are debugging.

Once you are correctly positioned you can type the variable $ReturnValue into the Immediate Window and hit the Enter key to display the returned value. The value will be displayed right below the command. If your Immediate Window is missing in debug, simply go to Debug, Windows, Immediate.

 

It is accessible via the $ReturnValue keyword typed directly into the Immediate Window, as we just saw, but it is also available in the Watch Window. In both cases, the dollar sign is required to see the value. This variable is handy to keep in the Watch Window since it’s always available in each of your methods while testing. If you do not see your Watch Window, go to Debug, Windows, Watch when debugging.

 

Another possibility is to use the Autos Window. Go to Debug, Windows, Autos in Visual Studio to activate the window.

 

 

This is even more interesting than both of the previous windows. First, you have nothing to type; the value is automatically available for you. Second, if the debugged method calls another method within its return statement, both return values are displayed. This means the value of $ReturnValue is displayed as before, but so is the underlying return value from the other method. Here is a snippet of code with a method named “Method1” that calls another method called “Method2”, and so on until it reaches “Method3”.

    class Program
    {
        static void Main(string[] args)
        {
            var x = Method1();
        }

        private static string Method1()
        {
            return "Method1 call " + Method2();
        }

        private static string Method2()
        {
            return "From Method2 " + Method3();
        }

        private static string Method3()
        {
            return "From Method3";
        }
    }

If we have a breakpoint at the ending curly bracket of Method1, we will see two lines in the Autos Window. The first line is the “Method2” return value and the second line is the return value from “Method1”. This is quite powerful: you see not only the return value of the method being debugged, but also the one used inside its return statement.

 

The second feature that will be in your development belt is called Peek Definition. Peek Definition improves your productivity by allowing you to display code that lives outside the current file. This means that if you are using a method from a class that resides in another file, you do not need to switch back and forth between those two files. To see the content of a method, you only need to hit the shortcut Alt+F12 and the method’s code will expand below the method call, inside a panel. At first it may look like a simple addition to Visual Studio, but after a few minutes Peek Definition shines by improving the organization of your workspace. Files opened just for reading no longer need to stay open. It’s fast to use, reduces the number of open files in your Visual Studio workspace, and lets you code faster because you do not need to switch your focus.

Here is an example. Let’s say that the “Program” class is in one file, “OneClass” is in another and “TwoClass” is in a third. If you are writing code inside the “Main” method of the “Program” class and want a quick look at “OneMethod”, then before Visual Studio 2013 you had to open the “OneClass” file and close it again.

    class Program
    {
        static void Main(string[] args)
        {
            var one = new OneClass();
            var oneMethodResult = one.OneMethod();
        }
    }

    public class OneClass
    {
        public string OneMethod()
        {
            int i = 110;
            string message = "This is a string";
            return string.Concat(message, i, new TwoClass().TwoMethod());
        }
    }

    public class TwoClass
    {
        public string TwoMethod()
        {
            return "This is from TwoClass";
        }
    }

 

Now, with Visual Studio 2013, you just need to hover over the method you want to see and hit Alt+F12, or right-click on the method and select Peek Definition.

 

 

This will open the Peek Definition window.

 

From here you have more options. The first is to simply take a look and close the window. This can be very fast: the shortcut Alt+F12 opens the Peek Definition window and the Escape key closes it. Furthermore, you can edit the code inside Peek Definition. That means you can make quick edits without leaving the code you are developing. This is the essence of the feature: keep your focus on the current task you are coding. If you need more than quick access, you can click the folder icon at the top right of the Peek Definition window. This opens the file that contains the method currently being peeked. The last move you can do with Peek Definition is to Peek Definition the Peek Definition! Not only can you see the code of one method, you can also see the code under the method that you peek. If we look back at the example above, you can also see what “TwoMethod()” contains by pressing Alt+F12 again.

 

This time, a navigation menu built with blue dots appears. When you click the first dot, you go into the Peek Definition of the first method. If you click the second dot, you peek the inner method of the Peek Definition. This feature is very handy and will help you keep your focus on the code you are working on without having many files remaining open for no reason.

 

The third feature is an improvement to the scrollbar. With Visual Studio 2013, it’s possible to see a map of your code directly in the scrollbar. This gives you a big picture of the current file and lets you click on the map for fast access to a specific part. Not only is navigation easier, it also simplifies code refactoring by giving easier access to sections of the file. To activate this feature, right-click on any scrollbar and select “Scroll Bar Options…”.

 

 

From here you can select a new behavior called “Use map mode for vertical scroll bar”. This enlarges the vertical scroll bar to the size you select from the source overview drop-down. The example below displays a medium definition.

 

Having the new scrollbar activated in “map mode” produces the following effect.

 

You now have an overview of where you are located in the file and can quickly find the portion you want to move to. Finally, the map scrollbar has an option to preview the mapped code inside a tooltip. As you may have noticed, the scrollbar has a white section that represents the portion currently displayed on the screen. If you move your cursor anywhere over the scrollbar you will see a tooltip with the code at a larger, readable size. The screenshot below displays the tooltip in action (labeled “1”) and the current code displayed in the editor (labeled “2”).

 

 

Your productivity should improve with these three features. The first lets you quickly see what a method returns, solving the old problem of having to declare an additional variable just before the return statement only to inspect it. The second lets you stay focused on your main development task: Peek Definition is the way to look at code for a short time without losing focus or opening many files that then stay open for nothing. Last, we have seen how to use the scrollbar as a map of your code. With the trend toward wide screens, this scrollbar puts otherwise dead space to work as a quick way to navigate inside each of your files.
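To make the old workaround concrete, here is a minimal C# sketch; the method names are illustrative only and are not taken from the samples above. Before Visual Studio 2013, a temporary variable was often introduced purely so the debugger had something to inspect; with the new return value support, the natural form is enough.

    public static class ReturnValueDemo
    {
        // The old workaround: declare a temporary variable purely so the
        // debugger can show the value before the method returns.
        private static string GetGreetingOld()
        {
            var result = "Hello " + BuildName();
            return result; // breakpoint here to inspect 'result'
        }

        // With Visual Studio 2013: keep the natural form, break on the closing
        // curly bracket and inspect the value in the Autos Window, or via
        // $ReturnValue in the Watch or Immediate Window.
        private static string GetGreeting()
        {
            return "Hello " + BuildName();
        }

        private static string BuildName()
        {
            return "World";
        }
    }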

About the author

Patrick Desjardins has been programming since 2000 and has focused his knowledge on the web since 2002. He has a broad knowledge of the web from developing with PHP and ASP. For the past few years he has been developing exclusively with ASP.NET MVC, which is his favorite framework. Patrick likes to develop solutions that are easily maintainable and scalable, which is why he loves to refine how to design web sites with good design patterns without overdoing it. Patrick has also worked on many websites, about half intranet and half extranet, and his varied experience has led him to be a Microsoft ASP.NET MVP for 2013. You can find his blog at http://patrickdesjardins.com/blog and can contact him on Twitter @MrDesjardins in English or in French.

 


Windows 8 /8.1 ApplicationDataContainers and VB


Editor’s note: The following post was written by Visual Basic MVP Ockert Johannes du Preez

Windows 8 /8.1 ApplicationDataContainers and VB

Storing little pieces of information is pivotal to any program. The trick, however, is knowing what tools are at your disposal in such cases. Many a programmer is still trying to wrap their head around Windows 8’s way of doing things, and Windows 8.1 is no exception. The Windows Store is an even bigger culprit when it comes to topics such as storing info. Why? Because Windows Store apps do not make use of desktop apps’ APIs and tools; they have their own set of tools and APIs with which we need to program.

Enough small talk, let’s get started.

To store small bits of information in your Windows 8 / 8.1 Store app, you will need to use the ApplicationDataContainer class. There are, in effect, two separate ways you could use this class: you could store your data locally (in the local application data store) or in the roaming application data store (which makes it easy to sync information across multiple devices). Today I will concentrate only on the local application data store.

The ApplicationDataContainer class has two methods and four properties. The methods are: CreateContainer and DeleteContainer. The properties are: Containers, Locality, Name and Values.

Let us put all of these to use.

Fire up Visual Studio 2013 Preview on Windows 8.1 Preview, or Visual Studio 2013 RC on Windows 8.1 RTM, and start a Visual Basic Windows Store project. You can give it a descriptive name. Design your MainPage.xaml to resemble the following picture:

Figure 1: Our design

 

The resulting XAML code looks as follows:

Code Segment 1: XAML code for MainPage.xaml

    <Grid Background="{StaticResource ApplicationPageBackgroundThemeBrush}">

        <TextBlock HorizontalAlignment="Left" TextWrapping="Wrap" Text="ApplicationDataContainers in VB 2013 and Windows 8.1" VerticalAlignment="Top" FontSize="48" Margin="62,48,0,0" Width="1222"/>

        <Button x:Name="btDelData" Content="Delete Data" HorizontalAlignment="Left" VerticalAlignment="Top" Margin="528,290,0,0"/>
        <Button x:Name="btReadData" Content="Read Data" HorizontalAlignment="Left" VerticalAlignment="Top" Margin="379,290,0,0"/>
        <Button x:Name="btSaveData" Content="Save Data" HorizontalAlignment="Left" VerticalAlignment="Top" Margin="237,290,0,0"/>

        <TextBlock HorizontalAlignment="Left" TextWrapping="Wrap" Text="Enter Text To Be Saved" VerticalAlignment="Top" Margin="240,209,0,0" FontSize="16"/>
        <TextBox x:Name="tbAppText" HorizontalAlignment="Left" TextWrapping="Wrap" Text="TextBox" VerticalAlignment="Top" Margin="240,233,0,0" Width="398"/>
        <TextBlock x:Name="tShowText" HorizontalAlignment="Left" TextWrapping="Wrap" Text="Saved Text Will Be Shown Here Once Read Data Has Been Clicked" VerticalAlignment="Top" Margin="670,199,0,0" Height="126" Width="576" FontSize="14"/>

    </Grid>

Obviously, you could copy and paste this code directly into your XAML code editor, but if you have chosen to have different names for your objects, yours would then look different.

Open up MainPage.xaml.vb by double-clicking on any of the created controls and add the following namespace import at the top of your code:

Code segment 2: Imports statement

Imports Windows.Storage 'Import Storage Namespace To Access Storage Libraries

 

We need the Windows.Storage namespace so that we will be able to store and retrieve information. Add the following modular variable:

Code segment 3: Modular variable that will assist in using the ApplicationDataContainer class

    Private adcContainer As ApplicationDataContainer = _
        ApplicationData.Current.LocalSettings 'Create LocalSettings Object To House All Containers & Data

 

Here we have created a variable named adcContainer, which holds the LocalSettings object for working with our local data. Now we get to the meat of our little program. The next order of business is to add the code that stores a value with the use of the adcContainer object. Add the following code:

Code segment 4: Storing local data

    'Store Information
    Private Sub btSaveData_Click(sender As Object, e As RoutedEventArgs) Handles btSaveData.Click

        'Create Named Container For Our Data
        Dim adcDataKey As ApplicationDataContainer = _
            adcContainer.CreateContainer("AppData_Example_Container", ApplicationDataCreateDisposition.Always) 'Always Create Container Again

        'Ensure Container Does In Fact Exist
        If adcContainer.Containers.ContainsKey("AppData_Example_Container") Then

            'Store TextBox Text Into Newly Created Setting
            adcContainer.Containers("AppData_Example_Container").Values("AppData_Example_Value") = tbAppText.Text

        End If

    End Sub

 

First, we created a container named AppData_Example_Container for the data that we want stored. Whilst creating this object, we specified that the container should always be created. We then did a small test to see that the container does in fact exist, and stored a value inside of it. This value we obtain from the TextBox named tbAppText. We are halfway!

Once the information is stored, we obviously need to retrieve it, so let us add the retrieval code now:

Code segment 5: Retrieving local data

    'Retrieve Information
    Private Sub btReadData_Click(sender As Object, e As RoutedEventArgs) Handles btReadData.Click

        'Determine If We Have A Key To Work With
        Dim blnContainerExist As Boolean = adcContainer.Containers.ContainsKey("AppData_Example_Container")

        Dim blnSettingExist As Boolean = False 'To Determine If A Valid Value Present In Key

        If blnContainerExist Then 'Container / Key Exists

            'Check To See If It Has A Valid Value
            blnSettingExist = adcContainer.Containers("AppData_Example_Container").Values.ContainsKey("AppData_Example_Value")

            'Display Retrieved Value If Present, Otherwise Show Nothing
            If blnSettingExist Then
                tShowText.Text = adcContainer.Containers("AppData_Example_Container").Values("AppData_Example_Value")
            Else
                tShowText.Text = ""
            End If

        Else

            tShowText.Text = ""

        End If

    End Sub

In the above code segment, we do a test to see whether the data container named AppData_Example_Container exists. We then do another test to see if there is a value stored inside AppData_Example_Container. If there is indeed a value, we display it; if there isn’t, we show nothing.

If we were to run this app now, we would be greeted with a screen similar to Figure 2, where we can enter information.

Figure 2: Entering and storing data

 

 

Once you have stored the data and clicked on Read Data, your screen will resemble Figure 3.

Figure 3: Retrieved data

 

The finish line is in sight! We already have a working project that we can build on and customize exactly to our needs. I do, however, want to show you one more thing: deleting ApplicationDataContainers. Let us add the last piece of code for our program:

Code segment 6: Deleting ApplicationDataContainers

    'Delete Key / Container
    Private Sub btDelData_Click(sender As Object, e As RoutedEventArgs) Handles btDelData.Click

        adcContainer.DeleteContainer("AppData_Example_Container") 'Delete Container

    End Sub

 

This simply deletes the container named AppData_Example_Container.
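For readers who prefer C#, here is a minimal sketch of the same save, read and delete flow using the identical container and value names; the wrapper class and the way it is called are illustrative assumptions, not part of the original sample.

    using Windows.Storage;

    public static class LocalSettingsExample
    {
        private const string ContainerName = "AppData_Example_Container";
        private const string ValueName = "AppData_Example_Value";

        // Save: create (or reuse) the container and store the text.
        public static void Save(string text)
        {
            var local = ApplicationData.Current.LocalSettings;
            var container = local.CreateContainer(ContainerName, ApplicationDataCreateDisposition.Always);
            container.Values[ValueName] = text;
        }

        // Read: return the stored text, or an empty string if nothing was saved.
        public static string Read()
        {
            var local = ApplicationData.Current.LocalSettings;
            if (local.Containers.ContainsKey(ContainerName) &&
                local.Containers[ContainerName].Values.ContainsKey(ValueName))
            {
                return (string)local.Containers[ContainerName].Values[ValueName];
            }
            return string.Empty;
        }

        // Delete: remove the container and everything stored in it.
        public static void Delete()
        {
            var local = ApplicationData.Current.LocalSettings;
            local.DeleteContainer(ContainerName);
        }
    }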

I hope you have enjoyed today’s lesson as much as I have, until next time, happy coding!

 About the author

Hannes (a.k.a. Ockert J. du Preez) has been actively engaged with the community since 1998. He has been involved with CodeGuru since 2001 and has since accumulated almost 11,000 posts there. He has been writing articles for CodeGuru since 2005 and has been a trainer at National Computer College Vereeniging (South Africa) since 1998, where he provides sessions on the following languages: Visual Basic 5–6, Visual Basic .NET, C, C++, C#, Java, HTML 4–5, CSS, JavaScript, PHP, VBScript, SQL and ASP.NET. You can learn more about Hannes on his website or Facebook page.

About MVP Monday

 

The MVP Monday Series is created by Melissa Travers. In this series we work to provide readers with a guest post from an MVP every Monday. Melissa is a Community Program Manager, formerly known as MVP Lead, for Messaging and Collaboration (Exchange, Lync, Office 365 and SharePoint) and Microsoft Dynamics in the US. She began her career at Microsoft as an Exchange Support Engineer and has been working with the technical community in some capacity for almost a decade. In her spare time she enjoys going to the gym, shopping for handbags, watching period and fantasy dramas, and spending time with her children and miniature Dachshund. Melissa lives in North Carolina and works out of the Microsoft Charlotte office.

 

What’s in Store at the MVP Global Summit


By Paulette Suddarth

This is a special year for the MVP community. We are coming to the end of the 20th anniversary year of the MVP Award and it will be bookended, for the first time, by two MVP Global Summits.

This milestone anniversary—so momentous in our relatively young technology industry—has given all of us cause to celebrate, but that’s not why we are having two MVP Global Summits this year.

The reason for next month’s Summit—as it is with all MVP Global Summits—is the opportunity to bring together more than a thousand MVPs from around the globe and hundreds of Microsoft team members to share their ideas and news about Microsoft technologies. And with so many Microsoft services and devices launching now and in the coming months, we thought this is the perfect time to sit down for deep dive sessions with these exceptional community leaders.

One of the notable launches is Microsoft’s flagship device, Surface 2, which has been creating a tremendous buzz among MVPs. From developers to IT professionals, business owners to information workers, we’ve been hearing loud and clear that they want to better understand how this device can help them enhance the technology experience of their communities.

As a result, the Surface product team is making it easier to get this important device into the hands of these community leaders. They will be providing MVPs who are able to join us at the MVP Global Summit with a voucher to help in the purchase of any Surface device, as well as a complimentary Surface keyboard. MVPs will be able to take advantage of this special offer at the Bellevue Microsoft Store anytime during the MVP Global Summit, including two private events at the Store. The product team is looking forward to hearing what they think.

The MVP Global Summit provides all of us at Microsoft the opportunity to receive direct feedback from hands-on experts who have the pulse of their communities. One of the things they’ve told us about the MVP Global Summit is that we request too much feedback! So I sat down to fill out the MVP Global Summit surveys and, as usual, the MVPs were right. It took me 47 minutes. So I looked at ways to improve the process and the consolidated survey now takes me seven minutes. I invite all MVPs attending the Summit to beat my time. And if that isn’t enough incentive, we have plenty of others in store.

Making Web Development Wonderful Again with Web Essentials


 

Editor's note:  The following post was written by Visual C# MVP Schabse Laks

Visual Studio is a fantastic development environment for .Net-based Windows and web projects.   However, this view of Visual Studio overlooks one of its most useful features: the text editor.  Visual Studio includes a powerful text editor that can easily be used to edit standalone files without a project.

Unlike competing development editors such as Sublime, Visual Studio maintains a full-featured parse tree as you edit each file.  This allows VS to provide unparalleled auto-completion and document-based actions, and can make the development experience much faster and more pleasant.  These features are especially powerful in HTML and CSS documents.

All of this power is also available to extensions, making it simple to extend the editors with powerful new features to suit your needs.  (I wrote a separate blog post about building your own extensions.)  A poster-child example is the open-source Web Essentials extension by Mads Kristensen.  This extension adds a variety of useful features that are essential to modern web development.  You can install Web Essentials from the Extension Gallery in Visual Studio:

Using the text editor alone

To use the VS text editor, you can simply drag any text file into the editor.  To work more efficiently with an entire project, you should create a Web Site project to hold all of your files for easier navigation.  Click File, Open, Web Site, then select the folder containing your existing files (or create a new folder to start from scratch).  Web site projects will automatically list all files in the directory you select; there are no project files or inclusion lists to worry about.  If you create a new file outside of Visual Studio, simply click the Refresh button in Solution Explorer and it will show up as expected.

With Web Essentials, Visual Studio now has everything you need for a complete modern web development workflow.  Visual Studio has built-in git support to manage source control, and Web Essentials can compile LESS files, run JSHint on your Javascript code, and minify and combine all of your static resources.  You can also use the separate Image Optimizer extension to optimize PNG images.  You can easily install and update popular 3rd-party libraries like jQuery or Bootstrap from NuGet.

To test your HTML locally, you can simply press F5, and Visual Studio will run IIS Express to host your site on a random port on localhost.  No admin privileges needed.  Visual Studio is thus especially useful for purely static sites hosted on GitHub pages.  If you need to upgrade to an actual server, Windows Azure is just a few clicks away.  You can also easily publish your site to an existing host over FTP or Microsoft Web Deploy.

HTML Files

Visual Studio 2013’s HTML editor was rewritten from scratch on top of the modern WPF-based editor stack.  This new editor eliminates all of the formatting and validation issues with the old editor, and opens the door for extensions like Web Essentials to add all sorts of additional features.

The editor is now much smarter about HTML attribute IntelliSense.  All of the aria-* attributes are now grouped into a single entry to avoid cluttering IntelliSense:

 

The editor also knows about attributes used by common web development frameworks, such as jQuery Mobile, Knockout.js, and AngularJS:

 

As you might expect, the editor will warn you about invalid HTML as you type.

Web Essentials adds more features to the HTML editor.  You can press F12 (Go to Definition) on any local path (e.g., an image, a link to a separate HTML file, or a reference to a JS or CSS file) and jump directly to that file.  You can see referenced images in tooltips.  You can even press F12 on a CSS class and it will try to find a matching selector in CSS or LESS files in your project.

Web Essentials also adds support for Emmet/ZenCoding.

CSS Files

Web Essentials really shines in its CSS features.  Visual Studio’s standard CSS editor already features IntelliSense for CSS pseudo-classes/elements, property names, and values.  It will also give warnings for invalid selectors or properties.  It even recognizes CSS hacks and will not show warnings for invalid CSS from common hacks.

Web Essentials adds lots of nice extras to the CSS editor.  It adds full IntelliSense for selectors as well, including all HTML elements and all CSS IDs & classes in the file.  It even adds auto-completion for attribute names & values, filtered to only show attributes that apply to the element in the selector.  It also adds a small preview every time you write a color (by name, HEX, RGB, or HSL).

You can see browser support information in the tooltip for each CSS property, along with sample usage.  If you use a property that has vendor-specific versions for some browsers, Web Essentials will warn you if any of the vendor-specific (or standard) versions are missing, and will automatically update all of the vendor-specific versions as you change the value of the standard property (this feature does not work for vendor-specific values of standard properties).

You can use smart tags to easily convert between color formats, or to add CSS hacks to selector blocks to make them visible only in older versions of IE.  You can see the specificity in a tooltip for each selector, to help resolve importance issues.

Web Essentials can warn you when you have duplicate CSS properties or selectors, or other issues like overqualified ID selectors or extra units on zeros.

As you type URLs (for images or fonts), Web Essentials will provide IntelliSense for folders and files in your project to help you select an existing file.  Once you’ve entered the URL, you can see a preview of the image in a tooltip, and you can convert it to an embedded data URI in a smart tag.

Web Essentials can also automatically generate minified (and optionally also gzipped) copies of each CSS file as you save.  You can also drag an image or web font from the Solution Explorer (or even Explorer) to insert a background property or @font-face directive.

You can press Ctrl+Shift+Up/Down to increment/decrement numbers in CSS properties, or to lighten/darken hexadecimal colors.

LESS Files

Visual Studio has a first-class LESS editor, including all of the features from the CSS editor.  Web Essentials adds a preview pane on the left side to show the generated CSS, automatic compilation on save, and tooltips to show the generated selector from complex nested LESS selectors:

 

Web Essentials also adds a useful refactoring command to extract repetitive code into a common variable or mixin.

There are a couple of LESS language features that the editor does not support yet (in particular, nested media queries).  The LESS syntax tree is part of Visual Studio itself and cannot be fixed by an extension; this will be fixed in an upcoming Visual Studio update.

Javascript Files

Visual Studio’s Javascript editor was completely rewritten in VS2012.  The new editor will actually run all of your Javascript code using a specially modified version of IE11’s fast new Chakra Javascript engine.  This means that Visual Studio can know exactly what all of your code is doing, no matter how complicated it is.  Even if you write code using Array.forEach to add functions to an object programmatically, those functions (and their return values) will show up correctly in IntelliSense.  It can even figure out what values are passed to asynchronous callbacks.  This makes Javascript development in Visual Studio much more pleasant than other editors.  Unfortunately, Visual Studio’s IntelliSense engine is not Node.js aware, ruining what could have been the best Node.js development experience ever.  I’m working on a project to fix this, but it’s not finished yet.

The dark side of the new Javascript editor is that it’s written entirely in native code and is completely closed; unlike other language services, VS extensions cannot see the syntax tree or fully participate in the editing experience.

Even so, Web Essentials does add some features.  After installing Web Essentials, you’ll get IntelliSense for strings passed to DOM selector functions like getElementById, getElementsByTagName, and getElementsByClassName (full CSS selector IntelliSense for querySelector or jQuery is not yet possible). Similarly, you’ll get IntelliSense for module names or file paths passed to the Node.js require() function.  There is currently no IntelliSense for module names passed to AMD/RequireJS’ require() function.  Pull requests are welcome; tweet me for details.  On a smaller but still useful note, you’ll get IntelliSense for "use strict"; and "use asm"; for writing modern code.

You can also press F12 on a Visual Studio /// <reference> directive or Node.js require() call to jump to the included source file.

Web Essentials can also automatically run JSHint whenever you save a JS file, including full support for .jshintrc and .jshintignore configuration files.  Like CSS, it can also automatically minify and gzip your code on save (and compile CoffeeScript).

Bundling

To optimize performance, websites should combine all CSS and Javascript files into a single bundle of each type.   This eliminates the overhead of making a separate HTTP request for each individual file, and can drastically improve page load times on unreliable connections.  For more information, see Google’s Page Speed documentation.  (Note that upcoming standards like SPDY and HTTP2 use protocol-level parallelization to offer better performance for unbundled resources; see here)

ASP.Net 4.5 added a server-side bundling system in Microsoft ASP.NET Web Optimization; see here for more detail.  When writing a web application that involves server-side code, this (or a similar framework like Cassette) is the ideal way to add bundling; it lets the server take care of everything for you automatically.
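As a rough illustration of that server-side approach (this is separate from Web Essentials), here is a minimal sketch using Microsoft ASP.NET Web Optimization; the bundle virtual paths and script file names are illustrative assumptions.

    using System.Web.Optimization;

    public class BundleConfig
    {
        // Typically called from Application_Start in Global.asax:
        //   BundleConfig.RegisterBundles(BundleTable.Bundles);
        public static void RegisterBundles(BundleCollection bundles)
        {
            // All of these scripts are served as a single, minified request in release mode.
            bundles.Add(new ScriptBundle("~/bundles/frameworks").Include(
                "~/Scripts/jquery-1.10.2.js",
                "~/Scripts/knockout-2.3.0.js",
                "~/Scripts/respond.js"));

            // Stylesheets can be bundled the same way.
            bundles.Add(new StyleBundle("~/Content/css").Include(
                "~/Content/site.css"));
        }
    }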

For static sites (e.g., GitHub pages), server-side bundling is not an option.  Instead, Web Essentials has its own bundling framework that lives inside Visual Studio and saves bundled files to disk.  To get started, select all of your JS or CSS files in Solution Explorer, right-click, then click Web Essentials, Create Javascript (or CSS) bundle file.

 

This will ask you for a filename, then generate an XML file listing all of the files in the bundle:

<?xml version="1.0" encoding="utf-8"?>
<bundle minify="true" runOnBuild="true" output="frameworks.js">
  <!--The order of the <file> elements determines the order of them when bundled.-->
  <file>/Scripts/bootstrap.js</file>
  <file>/Scripts/jquery-1.10.2.js</file>
  <file>/Scripts/knockout-2.3.0.js</file>
  <file>/Scripts/knockout.validation.js</file>
  <file>/Scripts/modernizr-2.6.2.js</file>
  <file>/Scripts/respond.js</file>
</bundle>

Every time you save the bundle file or source files, or build the project, Web Essentials will concatenate and minify (if enabled in the XML) all of the source files and save the output file.

As the auto-generated comment suggests, you can re-order the <file> tags to change the order in which the files will be concatenated, allowing you to ensure that libraries come before their dependencies.  You can also drag other source files into the XML.

Note that you should not include .min.js files in a bundle; Web Essentials will minify the whole bundle for you (and you certainly don’t want to include both minified and non-minified versions of a script, or it will run twice).  In fact, if you’re using bundles, you should probably delete your minified source files entirely.

Browser Link

Browser Link is the star web feature in Visual Studio 2013.  For the first time, the editor now has a real-time, two-way link with the browsers running your web apps.  Browser Link automatically injects a piece of Javascript into your pages to connect to a SignalR server hosted by Visual Studio, using Web Sockets in supporting browsers and falling back to slower techniques in older browsers.  That’s right; this works in any modern browser, all the way back to IE8, without needing any browser plugins or special settings.

The most obvious use of this feature is live reload; you can now press Ctrl+Alt+Enter in Visual Studio to instantly refresh every browser viewing your site.  However, the true power of Browser Link is its extensibility.  You can easily create a Browser Link addin simply by writing Javascript code (typically with jQuery) to run in the browser, plus some C# code to run in Visual Studio, and Browser Link lets them easily communicate with each other.  Visual Studio will give you source mappings to allow you to find the underlying HTML behind any DOM element in the browser, even from within Razor partials or loops.  The power of this feature is limited only by the possibilities you can imagine; as people grow more accustomed to this type of real-time communication, we should see lots of awesome extensions.

Web Essentials includes a number of Browser Link extensions.  You can use Design Mode to click any element and edit its text directly in the browser, and have your edits reflected in source code.  This is a great way to make minor textual corrections on a website, especially when working together with a copyeditor / proofreader.  You can use Inspect Element to click any element in the browser and jump to its source code in the editor, allowing you to easily find out where that HTML came from and make deeper changes.

You can use F12 auto-sync to change CSS properties in your browser’s dev tools, and have them reflected in real time in your CSS (or even LESS!) files in Visual Studio, and in any other browsers connected to Browser Link.  This is an extremely powerful feature for making iterative stylistic fixes; it greatly reduces the gap between making your site look exactly the way you want it to via minor tweaks in the dev tools, and rewriting those changes back in the editor after making it perfect.

You can use the Unused CSS detector to audit your CSS selectors and see which ones don’t match anything in the page.  For CSS selectors that only apply in some conditions (e.g., validation), you can click Start Recording CSS, interact with your page so that all of the selectors should apply, then go back to Visual Studio.  You will see all of the selectors that didn’t match anything in the warning list.

You can see the status of this feature in the Browser Link dropdown button, next to the Run button in the toolbar.  If the button is disabled, make sure that your project targets .Net 4.0 or later, and that your Web.config has <compilation debug="true">.  (To enable it in release mode, add <add key="vs:enableBrowserLink" value="true" /> to <appSettings> instead).  Browser Link is implemented entirely as normal ASP.Net modules, so it can work with any local ASP.Net server.

From this menu, you can invoke Browser Link extensions, and show the Browser Link dashboard.  The dashboard shows all currently connected browsers, and allows you to run extension commands against individual browsers.  The dashboard also shows troubleshooting tips to help solve common problems.

 

 

 

Browser Link can also work with static HTML files; you can use it to debug static sites even if .Net is not involved at all.  However, you must still use a .Net 4.0+ based server (such as IIS or IIS Express) so that the ASP.Net module can run.  You also need to register the module in Web.config so that it runs for static pages; see the documentation.

Solution Settings

Many of Web Essentials’ settings (especially CSS / LESS compilation settings) are highly specific to the projects you’re working on.  For some projects (e.g., static sites), you may want Web Essentials to compile everything; for others (e.g., larger sites with deployment-time CDN setups), you may want that to be handled elsewhere, and use less.js in the browser for development.

To support this, you can right-click your solution and click Web Essentials, Create Solution Settings.  Web Essentials will create a WE2013-settings.xml file in the solution directory, initialized with the current system-wide Web Essentials settings.  Once this file exists, Web Essentials will read all of its settings from the file, and the Web Essentials settings pages in Tools, Options will write to the file in your solution instead of the system-wide Web Essentials settings file.  You can check the file into source control to make sure that all developers on a team share the same settings.

Unfortunately, this feature only applies to Web Essentials settings; there is still no good way to save (e.g.) C# formatting settings on a per-project basis.  However, you can set tabs/spaces settings on a per-project or even per-file/directory basis using EditorConfig and its Visual Studio extension.

 

About the author

Schabse has been writing code in various languages since he was 12, starting with VB6 and moving on to more decent languages like C# and Javascript, including modern web client development and Node.js.  He enjoys explaining complicated ideas on StackOverflow, believing that understanding the concepts they're working with is the best way for budding programmers to succeed in the real world. Lately, he's been focusing on functional programming and more-powerful type systems.  He blogs at http://blog.slaks.net and can be found on Twitter at @Schabse.

 

 

New Features in Blend for Visual Studio 2013


Editor's note:  The following post was written by Visual C# MVP Alvin Ashcraft

Introduction

Developers have been building great user experiences with Blend since Expression Blend v1 was released in April 2007. With the release of Visual Studio 2012, Expression Blend became Blend for Visual Studio. Microsoft is now launching another release of the tool, Blend for Visual Studio 2013. Let’s take a tour of what’s new in this new Blend release.

The Look

The icon continues to evolve, and the new icon echoes its tighter relationship with the Visual Studio IDE.

 

Blend’s icon now sports the same angular infinity design as Visual Studio. The only difference is that Blend keeps its signature blue color while Visual Studio continues to use the new purple color adopted with the release of Visual Studio 2012 a year ago.

Guides and Rulers

Probably the most powerful new feature in Blend is the Guides system. The idea of allowing developers and designers to snap visual elements to custom-defined guides is not new. What makes Blend’s guides so useful for app developers is that they are dynamic. They can be tethered to either edge of the design surface and will adjust their position based on the size of the device currently targeted in the visual designer. Guides can be added while creating screens for any type of application in Blend, including WPF, Windows Phone and Windows Store apps (XAML and HTML).

Adding a new guide to the design surface is accomplished by dragging the cursor from one of the rulers onto the page at the desired snap point. The location of the guide can be adjusted by either dragging it to a new location or by selecting it and changing the Point value near the top of the guide line. This is also where the guide can be tethered to either the left or right edge (or top/bottom for horizontal guides). By holding down <ALT> and dragging from an existing guide, a new guide is created that will adjust its position in alternate device sizes in relation to the original guide’s location instead of the edge of the page.

In the View menu, a Manage Guides submenu can be found. This contains menu items to:

  •     Save the current guides so they can be used on other screens instead of being created manually each time.
  •     Load a saved set of guides to the current page.
  •     Clear all the guides from the current page.

This screen illustrates the Guides concepts:

  1. The Load, Save and Clear menu items.
  2. Adjusting the location of the guide by entering the Point value.
  3. Guides placed on a page to aid in aligning visual elements.

 

The following collections of guides are available to be used while developing Windows Store apps. They can be found under <Visual Studio Install Folder>\Blend\Guides\Windows Store\ and have a file extension of .guides.

  •       BasicPage
  •       FileOpenPicker
  •       GroupDetailPage
  •       GroupedItemsPage
  •       HubPage
  •       ItemsPage
  •       SearchResultsPage
  •       SettingsFlyout
  •       ShareTargetPage
  •       SplitPage

By using these guides, developers can feel more confident that the apps they submit to the Store will comply with the Windows style guidelines. Take a look at the Hub Page, one of the new templates included with Blend for Visual Studio 2013, with the HubPage.guides collection loaded.

 

Device Support

The Device panel for Windows Store app development in Blend for Visual Studio 2013 has been updated to reflect the change in how apps can be resized and snapped in Windows 8.1.

 

Support for smaller display sizes has been added to the Display dropdown. Developers can now design for 7” Windows devices.

The Edge item allows for previewing how a page will look snapped to different edges. The available options are now Both, Left, Right or None. A Minimum Width can also be set for each page on the Device panel. Options for previewing the current page with different color settings have been split into two settings: Theme and High Contrast.

Behaviors and Actions

XAML veterans will be happy to know that Behaviors are finally available for Windows Store apps. Those who are unfamiliar with using behaviors in other flavors of XAML can get a quick overview of their use in WPF from MSDN here:

Add a behavior to an object (Blend SDK for WPF)

Creating custom behaviors (Blend SDK for WPF)

Behaviors and Actions can be added to an element on a page by selecting one from the Behaviors section of the Assets panel. Ten have been included for use with Windows Store apps.

  •          CallMethodAction
  •          ChangePropertyAction
  •          ControlStoryboardAction
  •          DataTriggerBehavior
  •          EventTriggerBehavior
  •          GoToStateAction
  •          IncrementalUpdateBehavior
  •          InvokeCommandAction
  •          NavigateToPageAction
  •          PlaySoundAction

Additional behaviors can be downloaded from the Visual Studio Samples Gallery, and developers can write their own custom behaviors. There are some great online resources to help developers get started writing behaviors for their apps.

Templates and Controls

There is a new project template and several new item templates added to Blend for Visual Studio 2013. The Hub App project template is available for both XAML and HTML projects.

 

This new project type gets developers started with the new Hub control on its main page and also adds group details and item details pages. This new project template, and all of the existing templates have been updated to support dynamic resizing on different devices and other new guidelines for Windows 8.1 Store apps.

New item templates include the Hub Page, which is the centerpiece of the Hub App template just described, as well as a new Search Results page and SettingsFlyout. The SettingsFlyout item template is a host for the new SettingsFlyout control, which has a header with a back button, title text and a close button, and a details section that contains a StackPanel with a couple of TextBlocks in its default state. This details section is where developers add all of the controls to update their app’s user settings.

 

Other new controls in Blend for Windows Store apps are:

  •          Hub – This new control solves the problem of pulling data into an app from different data sources. Each HubSection can have its own DataTemplate and data source.
  •          DatePicker / TimePicker – Two controls that were missing from the default tool set when building Windows 8 apps are now available for building Windows Store apps.
  •          CommandBar / AppBarButton – New controls that provide a really simple way to build out the buttons on your app’s Command Bar.

Other Enhancements

Styles and templates that live in other files can now be edited directly from the page where they are being used. Developers no longer need to jump back and forth, losing the context in which they were working, to make changes and check to see how their page(s) are affected by each change.

Sample Data can now be configured for Windows Store apps in design mode. The data can be set up by hand or generated from existing classes in the current project or in another referenced assembly. The data itself is saved as JSON. Developers can set the design-time data context on the Data Context pane of the Data panel.

All UIElement objects have a new CompositeMode property which is used for apps with mixed XAML/DirectX UIs.

HTML Windows Store apps have most of the same updates plus an added bonus feature. There is now an Animation panel, which provides a timeline control for creating CSS animations. The panel works just as it does for XAML animations and creates very clean CSS.

Of course, the Blend team continues to squash UI bugs and improve the performance and reliability of the visual designers. I have not run any kind of metrics on it, but it feels like the number of times I need to reload the designer has been greatly reduced compared to previous versions of Blend.

If you are using an edition of Visual Studio 2013 that includes Blend, start using it today and reduce the time and effort it takes to get your apps into the Windows Store.

 About the author

 

Alvin is a Software Architect living near Philadelphia, PA. He has dedicated his 18-year career to building software with C#, Visual Studio, WPF, ASP.NET MVC, Windows Phone, Entity Framework and SQL Server. He has been awarded the Microsoft MVP award five times, once for Software Architecture in 2009 and in the four subsequent years for Visual C#. You can read his daily links for .NET developers on his blog, Morning Dew, at alvinashcraft.com or follow him on Twitter (@alvinashcraft). He currently works for Genzeon Corporation, developing clinical health care software. He has previously been employed by several large software companies, including Allscripts and Oracle, where he helped create software solutions for financial, business and health care organizations using Microsoft platforms and solutions.

 

Script References in Visual Studio 2013




Editor’s Note: In partnership with Microsoft Press, now celebrating their 30th year, MVPs have been contributing to an ongoing guest series on their official team blog. Today’s article, the 32nd in the series, is from ASP.NET/IIS MVP James Chambers.

Some of the gems that make Visual Studio the best development environment are so humble that you’re likely to miss them. The updates to script references, in my opinion, fall into that category while still improving on an already great feature of the product: IntelliSense for JavaScript.

A Little Background on IntelliSense

IntelliSense has been featured for nearly two decades now in Visual Studio’s lineage, dating back to 1996. Developers have become so accustomed to using it that when you spend any time in a different development environment you immediately feel impaired.

 


IntelliSense is great for learning APIs, object exploration and, when used well, can dramatically speed up the time it takes to write code. It also has the nice side benefit of allowing us to write appropriate and descriptive property and method names without having to worry about how long it would take later to access those members by typing them out L-E-T-T-E-R-B-Y-L-E-T-T-E-R.

When feature phones started introducing autocomplete, I was like, “Pfft! I’ve had autocomplete for years!”

IntelliSense for JavaScript

Let’s be honest here, the web’s kinda caught on now, almost to the point where it might stick around. More and more of the code we write has gotten closer to the client, especially in web development where we strive to create rich and interactive experiences for our users. It became increasingly painful to have to flip from a .Net language to JavaScript and lose our ability to navigate across objects. Where, in C#, we could dot-type into our objects, we’d always have to jump out to a separate JS file or spin up a browser to find out more about the methods in a JavaScript library.

Thankfully, Visual Studio 2008 introduced the first bits that really helped us gain some IntelliSense over our JavaScript; however, the context was limited to the pages and scripts where references were located, the scripts they referenced, and so on. This breaks down pretty quickly when you have partial pages that don’t reference the scripts you’re using, or when you’re working on your own JS libraries that don’t immediately load a library but still depend on it (such as a jQuery plugin).

Visual Studio 2010 helped out by pseudo-executing our code, improving performance, and adding type inference on JavaScript objects and variables. Then Visual Studio 2012 came along and upped the game by introducing the Chakra runtime from IE, further improving performance and shipping with an editor that supported the _references.js global references file. 

So what was left for Visual Studio 2013?  To continue reading the full article click here

About the author

 

James Chambers is a husband, father, speaker, author and Senior Software Developer in Canada, where he once made a snowblower out of three electric toothbrushes, a shovel and duct tape, all while fighting off an angry pack of polar bears. His development passions are fueled by new tech, new tools and mentoring others. Outside of geeking, James is driven to help find a cure for Juvenile Diabetes, ensure children world-wide have access to clean drinking water and to find ways to better distribute the world's food supply. Follow his coding adventures and other exploits at http://jameschambers.com.

About MVP Mondays

The MVP Monday Series is created by Melissa Travers. In this series we work to provide readers with a guest post from an MVP every Monday. Melissa is a Community Program Manager, formerly known as MVP Lead, for Messaging and Collaboration (Exchange, Lync, Office 365 and SharePoint) and Microsoft Dynamics in the US. She began her career at Microsoft as an Exchange Support Engineer and has been working with the technical community in some capacity for almost a decade. In her spare time she enjoys going to the gym, shopping for handbags, watching period and fantasy dramas, and spending time with her children and miniature Dachshund. Melissa lives in North Carolina and works out of the Microsoft Charlotte office.


 

 
