The Microsoft MVP Award Program Blog

Community Video Tips Campaign - Portugal


Share Your Video Tips!


We are celebrating all the ways current Microsoft consumer products, apps and features solve people’s real-world needs by launching the Microsoft Community Video Tips campaign, hosted by the Microsoft MVP program!

Enter by sending a link to your video tips between August 2nd and October 3rd!  

You don’t have to be a Microsoft MVP to submit your own videos!

The more video tips you create the more chances you have to win!

A summer of

WIN!


Two (2) Weekly Prizes
1600 Microsoft Points!

One (1) Grand Prize
Dell Vostro 1440
with Windows 7 & a Windows 8 upgrade code!

Get Started!
Step 1:

Upload your Microsoft video tip to YouTube

Step 2:

Tag your video with #mstips

Step 3:

Email your video tip link to:
mstips@microsoft.com

 

Xbox MVP William Devereux, Windows Expert-Consumer MVP Shawn Keene and Xbox MVP Daylong Furlough share three great examples of Community Video Tip submissions.


Friday Five–August 10, 2012

Friday Five–August 17, 2012

Wow! SkyDrive is Really Colorful!

$
0
0

Editor's Note: The following MVP Monday post is by PowerPoint MVP Glenna Shaw.

 

Wow! SkyDrive is Really Colorful!

Now how do I find my stuff?

SkyDrive has a whole new look and feel that’s colorful and attractive, but your first reaction on seeing it may be the same as mine: “Wow, that’s cool, but how the heck do I navigate this, find my stuff and manage my permissions?” Since I use SkyDrive to distribute my PowerPoint tutorials to the public, you can understand my concern. However, once I got over my initial reaction I realized the new look is the same as SkyDrive on my Windows Phone 7, and it’s actually quite easy. If you’re using a tablet, the new design is much easier to use than the traditional Windows Explorer layout.

Color and Structure have Meaning

You’ll notice that everything is now in color blocks.  Dark blue blocks are folders and will be listed first and display the number of files in the folder in the lower right corner.  Just click on the block to open the folder.

Filenames are displayed as color blocks that match the type of file, green for Excel, orange for PowerPoint, blue for Word, etc.  If your file isn’t an MS Office file the colors can be interesting, for example, orange-red for Adobe PDFs and gray for Zip files.  Each filename block displays the app icon in the lower left corner.  Just click on the filename block to open the file.


Managing Folders and Files

So how do you manage your files and folders?  The simplest method is just to right-click the folder or filename color block and use the pop-up menu.  You can see the Visualology folder is selected below, indicated by the checkmark in the box in the upper right of the color block.  Just click the desired action in the pop-up.  Alternatively, hover over a folder or filename and click the checkbox in the upper right to select it, then use the actions on the toolbar at the top of the SkyDrive window.

[Screenshot: selected folder with pop-up menu]

Changing Permissions and Properties

Clicking Properties in the pop-up shown above will open the Properties pane.  You can also click the Properties pane icon in the upper right of the SkyDrive window.  Hover over a folder or filename and click the checkbox in the upper right to select it and change the permissions or other properties.


Navigating Folders

You’ve clicked on a folder and now you don’t know how to get back to your main folder? Just use the breadcrumbs at the top of the SkyDrive window.


Are You a Traditionalist?

Not comfortable with the new look?  No worries, just click on the Windows Explorer view icon in the upper right of the SkyDrive window and you’ll have the familiar Windows Explorer layout.


While the new look and feel can be a little intimidating, you can see it’s easily mastered.  If you are using a tablet, just tap each item where I’ve instructed you to click.

 

About the Author

Glenna Shaw

Glenna Shaw is a Most Valuable Professional (MVP) for PowerPoint and the owner of the PPT Magic Web site and the Visualology blog. She is a Project Management Professional (PMP) and holds certificates in Accessible Information Technology, Graphic Design, Cloud Computing and Professional Technical Writing.

MVP Mondays

The MVP Monday Series is created by Melissa Travers. In this series we work to provide readers with a guest post from an MVP every Monday. Melissa is a Community Program Manager for Dynamics, Excel, Office 365, Platforms and SharePoint in the United States. She has been working with MVPs since her early days as a Microsoft Exchange Support Engineer, when MVPs would answer all the questions in the old newsgroups before she could get to them.

Friday Five–August 24, 2012


Microsoft Dynamics Sure Step Methodology Revisited


Editor's Note: The following MVP Monday post is by Dynamics CRM MVP Jerry Weinstock

 

Let’s look at some of the features of the Microsoft Sure Step Methodology and how you can use them to improve the delivery of services to your Microsoft Dynamics clients.

In the last year, I have done a complete turn-around regarding my perception and use of the Sure Step Methodology. After reading this blog posting, I hope you will end up where I am now – an enthusiastic user of the product. When I first saw Sure Step several years ago at the Convergence event, I was initially very impressed and intrigued. However, after digging into it a little further I became disillusioned and ignored it for several ‘reasons’:

  1. I did not need any help. I have been doing CRM implementations since V1.2 came out in 2003 and I am good at it.
  2. Our clients would not pay for a project management line item.
  3. Sure Step was excessively complex for the size of projects we do – it is only necessary for the “big boys”.
  4. The learning curve for working through the Sure Step application environment was too steep and it was not intuitive.

In the last year I have come around and learned that those ‘reasons’ were almost entirely wrong!

 


 

Let’s get started with two takeaways regardless of your role:

  1. Dynamics Partners – if you are not using at least some of what Sure Step has to offer, you are not operating as effectively as you could be. You do not need to map every step and use every template to receive benefits.
  2. CRM End Users – you should be asking your partner or prospective partners how they will leverage Sure Step to get you to where you want to go.

The Sure Step Methodology will help your project be:

  1. More predictable
  2. On time
  3. On budget

It is easy to become overwhelmed by the methodology since so many resources are available. There are over 500 Word, PowerPoint, Excel and Visio files set up as templates that can be used to jump-start your project planning, proposal development and analysis preparation.


Decision Accelerators

Sure Step can be applied to any of the Dynamics ERP solutions or CRM. One very powerful component of the methodology is the set of Decision Accelerators.

  • Accelerated Proof of Concept with CRM Online: An accelerated process to validate specific customer relationship management scenarios on the Microsoft Dynamics CRM Online or On-Premise application.
  • Requirements and Process Review: Review current processes to determine the requirements for the future state.
  • Fit Gap and Solution Blueprint: Determine how effectively the customer’s requirements fit with the proposed Microsoft Dynamics solution.
  • Architecture Assessment: Confirm the architectural design and infrastructure needs for the Microsoft Dynamics solution.
  • Scoping Assessment: Estimate high-level cost and put a resource plan in place for the deployment of the Microsoft Dynamics solution.
  • Proof of Concept: Validate the customer’s requirements with their own sample data in a test environment.
  • Business Case: Prepare a solid business case for implementation of the Microsoft Dynamics solution.
  • Upgrade Assessment: Review the current release of Microsoft Dynamics to ascertain the best approach for upgrading to a subsequent release.


 

Perspective and Use Turn Around

Abandoning my original bah humbug attitude is easy now as I look at several of the resources we have successfully used in consulting engagements during the last year.

Fit Gap – We use this template regularly to report on our gap analysis engagements. Once we modified the Word and Excel templates with our branding and adjusted the outline for the size of projects we generally implement, it significantly reduced the time it takes to create the customer deliverable. And in the spirit of judging a book by its cover, I can tell you that the immediate reaction to the deliverable is that it is a very professional presentation of the findings – we have more credibility, and the clarity of the presentation is improved. Additionally, since we have a consistent format, we can spend more time on the findings and less time figuring out how to format Word and/or PowerPoint.


Accelerated Proof of Concept – For a number of recent CRM implementations, we have been contacted by the prospect because of our online digital marketing program. In many cases we never meet the client in person; the CRM platform is CRM Online hosted by Microsoft, so there is no infrastructure work to do on site, and with the application in the cloud a new CRM environment can be spun up in 10 minutes. Consequently, clients expect a similar level of speed to success and a simplified acceptance process.

We have used the Accelerated Proof of Concept with CRM Online Decision Accelerator for all our CRM Online implementations. The components of the POC are:

  1. High-level requirements review
  2. High level Fit Gap Analysis
  3. High level Architecture Assessment
  4. Preliminary Business Value Assessment

At our company, we have morphed this into a rapid implementation process for those clients that will end up staying with CRM Online. The great thing about following this methodology is that you can map the process steps to the 30-day trial period for a CRM Online instance. Once the POC is accepted you can then move to the next Milestone after a Go-Live that tweaks the system to further map to the way the business works.


 

Onsite-Offshore Work Effort – Although this topic in the methodology focuses on understanding how to manage offshore development, it can also be applied to any subcontracting environment. There are no specific templates associated with this resource – it utilizes resources in other sections. However, it can be a great way to start thinking about how the work breakdown schedule will be split and who will be accountable for each part. We create our own version of the graphic for each project where we are subcontracting out some work. Then it is taped to the whiteboard to make sure everyone involved is continually reminded of our primary role and of who is doing what on the project.

Additionally, when we are brought in by other CRM partners to assist on their projects we use this internally to guide us during the subcontractor negotiations at a big picture level.

 

[Graphic: onsite-offshore work split]

 

The Microsoft Dynamics Sure Step Methodology in part or in whole has something for everyone in Dynamics implementations. Partners can get started today by using this link to get to the main page inside of PartnerSource for Sure Step. https://mbs.microsoft.com/partnersource/partneressentials/serviceplans/surestep/

 

About the author:

Jerry Weinstock

Jerry Weinstock is the Business Development Manager at CRM Innovation. Jerry has been working with CRM since the 1.2 release in 2003. CRM Innovation builds Email, Marketing and Event Management solutions for Microsoft Dynamics CRM. Twitter: @crminnovation

MVP Mondays

The MVP Monday Series is created by Melissa Travers. In this series we work to provide readers with a guest post from an MVP every Monday. Melissa is a Community Program Manager for Dynamics, Excel, Office 365, Platforms and SharePoint in the United States. She has been working with MVPs since her early days as a Microsoft Exchange Support Engineer, when MVPs would answer all the questions in the old newsgroups before she could get to them.

MVPs Showcase Their Skills, Win Awards at the Microsoft Worldwide Partner Conference


 

 

It’s no surprise that MVPs were center stage at the Microsoft Worldwide Partner Conference this summer in Toronto, Canada. Each year, the conference offers Microsoft partners the opportunity to vie for a variety of awards that showcase solutions built on Microsoft technologies.

[Photo: the Wortell team]

Many MVPs were among this year’s awardees, including Office 365 MVP Danny Burlage, whose team (pictured above) at Wortell received the prestigious Microsoft Office 365 Innovation Partner of the Year Award and the Microsoft Support Partner of the Year Award.

“Winning these awards at WPC is special. As an MVP and CTO I am very dedicated to providing great solutions around Microsoft technology,” says Burlage. 

His team created a total of 32 unique business scenarios to jumpstart customer productivity leveraging not only Office 365 but also SharePoint Online, Exchange Online and Lync Online.

Their goal was to optimize business processes and enable their customers to fully understand how a cloud-based solution could effectively address their business challenges and better realize great productivity. Wortell’s Office 365 innovations and services have been shown to significantly reduce costs and increase productivity while improving the customer experience.

 

“Winning these awards creates a huge sense of pride for the Wortell team,” says Burlage.

 

An impressive group of MVPs was involved with this year’s WPC event, including Office 365 MVPs Loryan Strant, Jethro Seghers, and Kerstin Rachfahl, Windows MVP Mitch Garvis and SharePoint MVP Corey Roth. They were on hand to answer questions, provide technical expertise and discuss new trends.

Captured here, Office 365 MVP Kerstin Rachfahl highlights the benefits of using Office 365 for professional services customers.

[Photo: Kerstin Rachfahl presenting]

The Office 365 team worked diligently with MVPs and the WPC to create an event that would provide a relevant exchange of technical expertise, demonstrate new trends and give MVPs an opportunity to share their expertise with other industry leaders.

This year’s WPC winners have been invited to appear onstage at the 2013 Microsoft Worldwide Partner Conference in Houston to be recognized by their peers, as well as to an exclusive celebration during the conference.

For a complete list of WPC awardees and to learn more about the WPC program, go to www.digitalwpc.com.


Friday Five–August 31, 2012

MVP Monday - Five Reasons you Should Consider DirectAccess

$
0
0

Editor's Note: The following MVP Monday post is by Forefront MVP Jordan Krause and was originally published Feb 13, 2012.

I have been working with Microsoft DirectAccess for about two years now, and I typically find myself writing or speaking about a deep-dive description of “this” or a technical write-up of “that”. Today I wanted to take a step back and cover DirectAccess at a higher level, both because there are some real world scenarios that anyone, not only the network security team, would be interested in hearing about, and also because speaking with new individuals and organizations almost daily over the past two years has brought me to realize that the majority of the IT population is still unaware of this amazing new technology. So here’s to spreading the word…

1. Userless VPN

I almost titled this one “Users will consider you a hero” but it looked silly on paper. Apparently not too silly as I just typed it anyway. Think of DirectAccess as a completely automatic VPN connection. Around the office here, we like to call it “userless”. A DirectAccess laptop is connected to the corporate network automatically, without user input, the moment that it receives internet connectivity. One of the reasons that I love working with DA so much is the feedback I receive from, well, everyone. Users love it because their workflow processes are exactly the same whether they are sitting in the office or sitting in a coffee shop, IT loves it because those laptops are always available and managed (more on that later), and executives love it not only for their own use, but also because of the reduced helpdesk costs that it brings to the table (also more on this later).

2. Reduced support costs - ROI

In the majority of my implementations, a reduction in support and helpdesk costs is a bonus side-effect that is often not realized until months after the rollout of DirectAccess. In most companies, a high percentage of helpdesk calls are from remote users struggling with a VPN connection. Here are some of the things you will no longer need to worry about:
Forgotten passwords – There aren’t many good options for an employee who has forgotten their password and isn’t going to be back in the office in the near future, nor for a user who reset an expired password on their desktop at the office, only to find out that this password change was not reflected on the laptop they are now trying to use from home. If you can get logged into the laptop with an old cached password, you stand a decent chance of getting this situation straightened out, though it’s still going to be a headache and time consuming for the helpdesk. On the other hand, I have seen far too many cases where the password was forgotten and the only recourse is for the helpdesk to reset the password in Active Directory. In this situation, until that laptop is plugged back into the corporate network, the only purpose it’s going to serve is to emit a friendly glow while it sits on the login screen. As you may have guessed by now, these problems are non-existent on a DirectAccess laptop. When the helpdesk resets a password in Active Directory, that new password is available for the user to type into their login screen in real time. The user can literally call the helpdesk – “I forgot my password” – have the helpdesk reset the password, log in with the new password, and be off the phone in less than a minute.
Port restricted firewalls – We have all been in a hotel room or connected to a public WiFi only to discover that we have internet access, but our VPN will not connect. I won’t get into the technical nitty-gritty here, but will simply state that DirectAccess is able to work around these kinds of firewalls that prohibit traditional VPNs from connecting.
VPN software not working – Having a VPN means you have VPN software installed on the client computer. Sometimes software breaks; it’s inevitable. DirectAccess has no client software. The componentry for DA is baked right into the Windows 7 operating system. There’s nothing to install, nothing to break, and therefore nothing to worry about.

3. “Always-on” access for management and patching of your remote devices

Many of you probably realized this benefit after reading above about the always-on user experience. A seamless, self-connecting tunnel to the corporate network not only enables users to have a continuous connection to the network, but also allows the network to have a continuous connection to the laptops. Even before the user authenticates to the machine, as soon as that machine gets internet access an IPsec tunnel is established that we like to call the “Management Tunnel” or “Infrastructure Tunnel”. This means that if the device is turned on and has an internet connection, even if it is still sitting at the login screen, the IT department and management servers have the ability to push patches, push SCCM, push Group Policy objects, and even remotely control that computer from the corporate network. There’s no more waiting around for users to connect their VPN before patches and antivirus definition files can be updated; with the implementation of DirectAccess, organizations see patch application rates immediately skyrocket. This always-on management capability is actually the sole reason that many of the customers I work with decide to use DirectAccess. While they all have plans to move to the “two-way street” with DirectAccess enabling users to access applications in the future, for the present time they may be happy with whatever remote access solution they currently have. Instead of scrambling to train all of the users on something new, DirectAccess is implemented as a “one-way street”, allowing only this management access and using it only for the continuous updating of their remote devices. Even in this limited one-way-street, manage-only kind of installation, you still get the password reset benefits that I mentioned earlier.

4. The Branch Office Scenario

Now that you have a grasp of what DirectAccess is and how it could benefit both your remote users and your management systems, let’s expand the playing field a little. In most cases, when referencing a “DirectAccess client computer” we are talking about a laptop that is roaming the earth, connecting back to the corporate datacenter automatically whenever it gets an internet connection. Another, less obvious way to benefit from DirectAccess as a technology is what I call the Branch Office Scenario. Many, many companies have multiple physical locations. There is commonly a main office and one or more branch offices with a smaller number of personnel. I speak with companies all the time who have branch offices all around the country or the world, and in most cases these branch offices connect back to the main office by either a semi-finicky site-to-site VPN or an expensive MPLS circuit. I used to work for such a company, where we had hundreds of offices, many of them with only 2-5 people, and each had a dedicated frame relay circuit that cost a lot of money for very little bandwidth. The monthly cost combined with the equipment cost and the stack of networking equipment piled up in the corner of these mostly single-room offices made the whole thing seem silly at times. How would you like to dump all of those expensive lines for regular internet connections? Enter DirectAccess. With DA running in your main office, you can trade in the dedicated circuits in these remote sites for regular internet connections, giving you much more bandwidth for a fraction of the cost. Then, provided the computers in that remote office are Windows 7, you simply make them DirectAccess-connected computers and voila, they are all connected back to the corporate datacenter over secure IPsec tunnels 24x7x365. What about that local file server that might be sitting in one of your larger remote offices? Got that covered as well. Not only can Windows 7 machines run DirectAccess, but a Server 2008 R2 machine can also be a DA client and connect seamlessly back to the corporate network.

5. You already have it, why not start using it?!

I don’t want this to be misleading: you do not currently own EVERYTHING that you need to turn DirectAccess on, but if you have already accomplished, or are planning to accomplish, a Windows 7 rollout like so many companies are right now, you are awfully close. As stated earlier, there is no client software that needs to be installed to run DirectAccess. All of the components necessary to run this technology are baked right into the operating system of Windows 7 Enterprise, Windows 7 Ultimate and Server 2008 R2. All you need is the DirectAccess “gateway”, for which you have a number of options. There are two different flavors of DirectAccess today. The first is native DA, for which you only need a simple Server 2008 R2 server in your network to be the gateway. Native DirectAccess comes with some particular requirements and limitations that make it harder to justify, like needing IPv6 inside your network and requiring all of your application servers to be Server 2008 R2. However, by far the more common flavor of DirectAccess is that provided by Microsoft’s Unified Access Gateway (UAG) platform. UAG is available as software that you can install on your own Server 2008 R2 box, or from Microsoft OEM system builders as specialized, hardened, turnkey networking appliances. UAG brings so many advantages to the table that I will list just a few of them here. When running UAG for DirectAccess:
No IPv6 requirements – The need for IPv6 and all Server 2008 R2 inside your network goes out the window. IPv6 is still an integral part of the way that DirectAccess works, but UAG contains translation technologies known as NAT64/DNS64 that will make all of the appropriate translations for you, so you don’t need to change your internal infrastructure to take advantage of DirectAccess. In fact, I have a demo environment running an IPv4-only network and Server 2003 application servers (not the UAG gateway; that is a DirectAccess Concentrator appliance built by IVO Networks) and running Active Directory 2000, and everything works perfectly.
Array and load balancing capabilities – Native DirectAccess does not provide you a way to run multiple gateways for redundancy. UAG provides the ability to join multiple gateways together in configuration arrays so that you need not make changes on each gateway individually, and also provides a Network Load Balancing mechanism that allows you to join multiple gateways together in active/active for both growth and redundancy purposes.
Security – Native DirectAccess means plugging a regular, general-purpose server into the edge of your network. UAG contains Threat Management Gateway, Microsoft’s robust firewall software, so that your gateway (and everything behind it) is protected from the public internet.
Web portals – UAG is not only an engine for DirectAccess, but also contains full-fledged SSL VPN functionality. With UAG you can simultaneously provide a DirectAccess entry point and one or more web portals that provide browser-based access to applications, and even full SSL VPN connectivity, at the same time. Maybe one of your employees has a DirectAccess laptop but left it at the office and needs to check email or pull a document out of SharePoint from home. With a UAG portal running you have a secure entry point that they can jump into to grab what they need even without their corporate machine handy. The technical capabilities of UAG can (and have) filled a book, so I will leave it at this for now – UAG is designed to be a one-stop shop for remote access. In many cases an implementation of UAG/DirectAccess on a single appliance (or array of appliances) equates to shutting down multiple vendor remote access solutions such as VPN, SSL VPN, virtual desktop solutions, etc. Consolidation of remote access makes life easier for the users, cuts down on administration time, and is good for the budget.

So there you have it, my summary of what I believe to be the future of remote access. I am fortunate enough to be immersed in these technologies daily so if you have any questions, or if there are any particular areas of DirectAccess that you would like to see expanded upon in subsequent articles, please feel free to reach out to me.

Author's Bio

Jordan Krause is a Microsoft Forefront MVP and enjoys working “on the edge”. As a Senior Engineer at IVO Networks he spends most of his days designing and implementing the integration of Forefront technologies for enterprises around the world. Jordan’s primary focuses are Unified Access Gateway and Threat Management Gateway, his favorite technology without a doubt being DirectAccess provided by UAG. Committed to continuous learning, Jordan holds multiple certifications including Microsoft Certified IT Professional in Enterprise Administration (MCITP: EA). He posts Forefront related articles and tech notes on the following page: http://www.ivonetworks.com/news/ and can be found via Twitter @jokra.


MVP Mondays

The MVP Monday Series is created by Melissa Travers. In this series we work to provide readers with a guest post from an MVP every Monday. Melissa is a Community Program Manager for Dynamics, Excel, Office 365, Platforms and SharePoint in the United States. She has been working with MVPs since her early days as a Microsoft Exchange Support Engineer, when MVPs would answer all the questions in the old newsgroups before she could get to them.


Automated Build-Deploy-Test using TFS 2012


Editor’s note:  The following post was written by Visual Studio ALM MVP Anuj Chaudhary

Overview

The automated build-deploy-test mechanism in TFS 2012 provides the ability to build, deploy and test on demand. A daily build can be scheduled that builds the latest code, deploys it to an environment and runs the tests against it. The build can be scheduled to run on a daily basis or on every check-in to make sure that we test often and test early.

If you are familiar with the Automated Build-Deploy-Test workflow in TFS 2010, you had to use a dedicated SCVMM that was assigned to a Team Project Collection. To learn more, please refer to http://blogs.msdn.com/b/mvpawardprogram/archive/2011/10/17/automated-build-deploy-test-using-tfs-2010.aspx

You can create the same kind of environment in TFS 2012, where it is called an “SCVMM Environment”.

However, with TFS 2012 a new category of Lab Environment is introduced called "Standard Environment".

Unlike TFS 2010, with a Standard Environment you don’t need your VMs configured with a specific SCVMM to run the Build-Deploy-Test workflow. You can deploy builds to this environment and run tests on it. However, you cannot remotely start, stop or take snapshots of this environment.

Also unlike TFS 2010, you don’t need three different agents installed on the machines in the environment. You can just install the test agent on the machines, or it will be installed for you as part of creating the environment.

Standard Environment Architecture

 

[Diagram: Standard Environment architecture]

The architecture comprises the components listed below.

Team Foundation Server

Team Foundation Server (TFS) provides the team the capability to coordinate and integrate their efforts. It provides the following:

    • Source control mechanism

    • Creating work items (User Story, Feature, Test case, Bug, etc.)

    • Build, Deployment and Test mechanism

    • Reporting mechanism

    • SharePoint Integration


Build Controller

It is a background process that manages a set of build agents.

Build Agent

It is a background process that receives build commands, runs the build and reports the results. Multiple build agents are managed by a build controller.


Test Controller

It is a background process that manages a set of machines which have the test agents installed.


Test Agent

It is a background process that receives test run commands, runs the tests and reports the results. Multiple test agents are managed by a test controller.


Test Manager

Test Manager is used to create test plans and test suites, assign test cases and run tests. Multiple test runs can be managed and analyzed in Test Manager.


Lab Manager

Lab Manager is used to create and manage environments. These environments are used to deploy build and run tests.


Standard Environment Workflow

The diagram below describes the step-by-step process to automate the Build-Deploy-Test workflow using TFS 2012.

[Diagram: Build-Deploy-Test workflow]

Build-Deploy-Test Workflow Output

The figure below shows the output of the Build-Deploy-Test workflow.

[Screenshot: Build-Deploy-Test workflow output]

Conclusion

The TFS 2012 Build-Deploy-Test mechanism provides the ability to build, deploy and test each and every build in an automated fashion, to make sure that regressions are caught in time. Thanks to the integrated reporting capability, the management team is always aware of the current status of the product. Thus the team achieves the goal of rapid delivery while maintaining the quality of the product.

 

About the author

Anuj is primarily focused on software testing and automation, with extensive working experience in manual testing, automated testing, performance testing and white-box testing. He contributes to the Visual Studio ALM community, specifically in testing areas involving Visual Studio unit tests, web tests, load tests, Coded UI, MTM and Lab Management.

Working with Multiple Solution Explorer Windows in Visual Studio 2012


Editor’s note:  The following post was written by Visual Basic MVP Deborah Kurata

How many monitors can fit on your desk? As many as possible! It’s great that Visual Studio allows you to view your source code on both monitors. Visual Studio 2012 has enhanced its support of multiple monitors by allowing you to display multiple copies of Solution Explorer.

Open one of your solutions in Visual Studio 2012. Then open several files. Drag several tabs over to your second monitor to build a “raft” of tabs. If you want to see the same code file on both monitors, click the tab and select Window | New Window from the menu to create a second copy of the file. Then drag one copy to your second monitor.

The result is something like Figures 1 and 2.

 


 

Figure 1. Visual Studio 2012 on the Main Monitor.

 


 

Figure 2. Visual Studio 2012 on the Second Monitor.

 

Visual Studio 2012 allows you to take this one step further. You can create a copy of Solution Explorer and place it in the “raft” of tabs on your second monitor. Just select your solution or a project in Solution Explorer and select New Solution Explorer View from the context menu as shown in Figure 3. This creates another copy of Solution Explorer.

 

 


 

Figure 3. Visual Studio 2012 Solution Explorer Context Menu

 

You can drag the copy of Solution Explorer to your second monitor and attach it to your code file tabs as shown in Figure 4. Both Solution Explorer windows are independent, so you can scope one to your Views and another to your Models. Or you can filter one to all files that contain “Customer” and one to all files that contain “Invoice”. You can set them up as you need for your particular coding session.

 

Figure 4. Visual Studio 2012 Second Monitor with a Second Solution Explorer.

 


 

This ability to have independent Solution Explorer windows on multiple monitors gives you flexibility and aids in your productivity with Visual Studio 2012.

 

Come visit my blog for more Visual Studio and .NET tips and code examples: http://msmvps.com/blogs/deborahk/ 

Enjoy!

 

About the author


Deborah Kurata is a software developer, independent consultant, and Microsoft MVP. She has authored several books including the “Doing Objects” series and is a speaker at conferences such as VSLive and TechEd. She is also the co-chair of East Bay.NET, a .NET developers group that meets in Berkeley.

Shim Saves The Day in A Tricky Unit Test Situation


Editor’s note:  The following post was written by Visual C# MVP William Wegerson

This is a story that actually happened to me, and it illustrates how I could have used the Visual Studio 2012 Fakes testing framework of stubs and shims to overcome a testing failure that, at the time, was insurmountable.

It all happened on a contract that I took at a financial company. They had an existing system going all the way back to VB6 and were just then sprinkling WPF into their existing system. They were unit testing freaks, and by God or Buddha there had to be a unit test for every piece of code owned, and even some that they didn’t own. If the blanket of code coverage didn’t safely tuck in the crying assemblies at night with a nice pacifier, the project lead would rap my knuckles like mother superior over a biblical infraction on the school yard. In retrospect, I am linked in with him to this day, so this story doesn’t turn out half bad.

Ok you get the idea of the situation, they mocked the heck out of all the code with a nameless but powerful mocking tool. But of course my code which I had to mock had a twist in it that at the time as mentioned could not be worked out. The originating assembly which my code du jour depended on at one point would internally create a new instance of some unholy object.

Since this was a true object and not an interface, it couldn’t be swapped out with a test mock, due to its being buried deep, deep in the code that my code called. Of course the usage of the object had dire consequences that basically made any unit test untenable to the mocking framework. Yes, I was being mocked, and my knuckles were rapped by the lead anyway out of pure frustration, for the code I was writing simply did not have a unit test.

If we would have had the Ultimate version of Visual Studio 2012 which has the fakes baked in, we could have worked around the instance creation by creating a shim. I will not get into a long example which mimicked what I saw, for you can make that leap of faith by looking at this simple code I have shimmed up.


Below are the steps needed to shim DateTime.Now, which stands in here for an assembly that can’t be stubbed/mocked via an interface.

1.      In your C# unit test project, right-click on the System reference and choose “Add Fakes Assembly”.

2.      Create a unit test, and in that unit test file add these using directives:

using System.Fakes;

using Microsoft.QualityTools.Testing.Fakes;

3.      Here is the example unit test code:

 

[TestMethod, Description("Unit test")]
public void TestShim()
{
    using (ShimsContext.Create())
    {
        // Within this context, every call to DateTime.Now is intercepted by the shim.
        ShimDateTime.NowGet = () => new DateTime(2012, 1, 1);

        var fakeTime = DateTime.Now; // It is always midnight on Jan 1st 2012
    }
}

 

Explanation

 

The using statement creates a scope where all things contained are dictated by any shim we may specify, meaning that if any other code happens to fall within that context and calls DateTime.Now, that code will, by design or not, get the same date regardless of the number of times it is called.

 

The delegate passed into the NowGet property of ShimDateTime loads our fake friend of a time, which will always return that specific datetime of January 1, 2012.

 

Finally we call the actual DateTime.Now which has been supplanted by our shim and always returns that date mentioned.

 

Summary: That way regardless of when we run this test it always returns the same date. In this example we directly call DateTime.Now, but if we had code that also called the method Now elsewhere not seen by the unit test, it would also receive the same answer.


If I could have used a Visual Studio 2012 shim, I could have created a fake object to be returned during my contract and avoided the knuckle bruising from the lead. This is a perfect example of when to use a shim. If you can use stubs, those are better, but if you are in a jam…the shim is your non-fake friend.
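For contrast, here is a minimal sketch of the stub approach mentioned above. This is my own illustration, not code from the original project, and the IClock and StubClock names are hypothetical:

using System;

// A stub only works when the dependency is behind an interface you control.
public interface IClock
{
    DateTime Now { get; }
}

// Hand-rolled stub: always reports the same instant, making
// time-dependent tests deterministic without any shim machinery.
public class StubClock : IClock
{
    public DateTime Now
    {
        get { return new DateTime(2012, 1, 1); }
    }
}

Code that accepts an IClock in its constructor can be tested with StubClock directly; a shim is only needed when, as in the story above, you cannot get between your code and a concrete type like DateTime.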

 

About the author


William Wegerson is a C# architect and developer who has been working in Denver, CO, USA for over 20 years. He enjoys interacting with the community as a featured speaker at groups along the Front Range, such as the Denver Visual Studio User Group, on a wide range of topics, including Silverlight and the new features of Visual Studio 2010 and, two years later, VS 2012. William has been awarded Microsoft MVP status in C# since 2008, due to his community work from speaking engagements, his blog (OmegaCoder.Com), as well as his work on the MSDN forums and StackOverflow.

Caller Details



Editor's note:  The following blog post was written by Visual C# MVP Dustin Davis

There is no lack of new features in Visual Studio 2012. But even with all the bells, whistles and menu titles that yell at me, I wanted to look for meat closer to the bone. What I found were some neat additions to help with tracing and debugging.

With Visual Studio 2012 we get three new attributes from the System.Runtime.CompilerServices namespace: CallerMemberNameAttribute, CallerFilePathAttribute and CallerLineNumberAttribute. Using these attributes we can collect certain information about the code execution without having to do any additional work. As much as I love using reflection, these attributes are a welcome addition to my tool belt.

How do we use them?

Their use is simple. They’re applied to optional method parameters. That’s it. At compile time the compiler takes over and automatically resolves the correct value and passes it at the point of invocation.

 

void Method(
    [CallerMemberName] string callerName = "",
    [CallerLineNumber] int lineNumber = -1,
    [CallerFilePathAttribute] string filePath = "")
{
    // The compiler substitutes the caller's member name, line number
    // and file path for these optional parameters at each call site.
}

Getting trace data

Trace data. We all need it. How we get it is the issue. When logging an execution trace we usually need caller details, parent types, line numbers, etc. For most of us, reflection is the go-to solution. It’s much more dynamic and flexible than hard-coding values and is easier to maintain in the long run.

You’ve probably seen or written a trace method like

void Trace(string message, string methodName)
{
    // Fully qualified so the call does not collide with this method's own name.
    System.Diagnostics.Trace.Write(string.Format("{0}: {1}", methodName, message));
}

And then invoked it using a hard coded value for the method name or using reflection

Trace("Executing", "MyMethod");

Trace("Executing", System.Reflection.MethodBase.GetCurrentMethod().Name);

Or maybe you’ve written something more common like the following

void Trace(string message)
{
    StackTrace stackTrace = new StackTrace();
    StackFrame lastFrame = stackTrace.GetFrame(1);
    string methodName = lastFrame.GetMethod().Name;

    System.Diagnostics.Trace.Write(string.Format("{0}: {1}", methodName, message));
}

which uses reflection via StackTrace to get the details of the caller. But with the new attributes, we have another option of writing that same method

void Trace(string message, [CallerMemberName] string methodName = "")
{
    System.Diagnostics.Trace.Write(string.Format("{0}: {1}", methodName, message));
}

We can invoke this version with just our trace message.

Trace("Executing");

 

At compile time the compiler will auto-populate the methodName parameter with a value equal to the calling method’s name. If you look at the IL, you can see exactly what it’s doing.

.method private hidebysig 

    instance void MyMethod () cil managed 

{

    // Method begins at RVA 0x206f

    // Code size 17 (0x11)

    .maxstack 8

 

    IL_0000: ldarg.0

    IL_0001: ldstr "Executing"

    IL_0006: ldstr "MyMethod"

    IL_000b: call instance void ConsoleApplication1.Program::Trace(string, string)

    IL_0010: ret

// end of method Program::MyMethod

The compiler is loading the string “MyMethod” as a parameter for the call to the Trace method.

Do we really need another solution?

If we have a solution that’s been working for us for so long, why the need for something else? Good question! Reflection is not known for performance. To get something as simple as the calling method’s name or line number, reflection is the proverbial sledgehammer.

There is also another issue that you might not think about initially until it bites you. When the assembly goes through obfuscation, the method names will become a cryptic mess of characters. Suddenly, “MyMethod” becomes “xEdjf3d7ldk” so when you go to look at your trace log you’ll see,

xEdjf3d7ldk: Executing

il2dur8dcpw: Executing

kf1ur8ldk83: Executing

Mapping this method name back to the original method would require some work.

The compiler will handle the attributes at compile time and obfuscation doesn’t occur until after the assembly is compiled. This means the information you get from the attributes will match what you have in your source code.

Using attributes where possible cleans up the code and makes it easier to read. When developers look at it, they won’t have to decipher any reflection code to figure out its intended purpose.

Additionally, the attributes work the same in Release mode as they do in Debug mode and don’t need PDBs.

Not quite there yet

While these new attributes are handy, the information they provide is limited. We only get the three attributes for caller name, line number and file path. Since most of us would not add a trace method to every class we build, just having the method name isn’t always going to be enough detail to be useful; we don’t get access to the containing type, method parameters or any of the other information that we could easily get using reflection. But this is a good first step.
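That said, the three attributes can be combined into a single helper to squeeze the most out of what is there. This is my own sketch, not one of the article’s samples; Tracer and TraceCall are hypothetical names:

using System.Diagnostics;
using System.Runtime.CompilerServices;

static class Tracer
{
    // The compiler fills in all three optional parameters at each call site.
    public static void TraceCall(
        string message,
        [CallerMemberName] string memberName = "",
        [CallerFilePath] string filePath = "",
        [CallerLineNumber] int lineNumber = 0)
    {
        Trace.WriteLine(string.Format(
            "{0} ({1}:{2}): {3}", memberName, filePath, lineNumber, message));
    }
}

A call such as Tracer.TraceCall("Executing") then logs the member name, source file and line number with no reflection at all.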

Other uses

Are there other uses for these attributes beyond tracing and debugging? There are! My favorite use is when implementing the INotifyPropertyChanged interface which requires magic strings for passing the property names.

If you’ve seen my blog or been to my code camp sessions, you may have seen that I have many alternatives for implementing INotifyPropertyChanged including using aspects (Aspect Oriented Programming), and a T4 template (code generation). Anything that makes implementing this interface easier is awesome in my book.

public class DataModel : INotifyPropertyChanged

{

    private int _myProperty;

    public int MyProperty

    {

        get { return _myProperty; }

        set { _myProperty = value; OnPropertyChanged("MyProperty"); }

    }

 

    public event PropertyChangedEventHandler PropertyChanged;

 

    protected void OnPropertyChanged(string propertyName)

    {

        if (PropertyChanged != null)

        {

            PropertyChanged.Invoke(this, new PropertyChangedEventArgs(propertyName));

        }

    }

}

Not only do we have to write all of that boilerplate code in the property setter, but we have to maintain it too! I don’t think I need to tell you about the problems with magic strings. This is also prone to error with obfuscation (yes, I know you wouldn’t obfuscate a model you’re binding to).

With a quick change, we can avoid having to deal with magic strings.

public class DataModel : INotifyPropertyChanged

{

    private int _myProperty;

    public int MyProperty

    {

        get { return _myProperty; }

        set { _myProperty = value; OnPropertyChanged(); }

    }

 

    public event PropertyChangedEventHandler PropertyChanged;

 

    protected void OnPropertyChanged([CallerMemberName] string propertyName = "")

    {

        if (PropertyChanged != null)

        {

            PropertyChanged.Invoke(this, new PropertyChangedEventArgs(propertyName));

        }

    }

}

The CallerMemberNameAttribute works not only with methods, but with properties and events too. This lets us update the INotifyPropertyChanged implementation to make use of the CallerMemberNameAttribute to provide the name of the changed property to the OnPropertyChanged method.
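Taking this one step further, the pattern is often wrapped in a generic setter so each property body shrinks to a single line. This variation is mine, not from the post above, and SetProperty is a hypothetical helper name:

using System.Collections.Generic;
using System.ComponentModel;
using System.Runtime.CompilerServices;

public class ObservableModel : INotifyPropertyChanged
{
    private int _myProperty;

    public int MyProperty
    {
        get { return _myProperty; }
        set { SetProperty(ref _myProperty, value); }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    // Assigns the value and raises PropertyChanged only when it actually
    // changed; the compiler supplies propertyName from the calling property.
    protected bool SetProperty<T>(ref T field, T value,
        [CallerMemberName] string propertyName = "")
    {
        if (EqualityComparer<T>.Default.Equals(field, value)) return false;
        field = value;
        var handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
        return true;
    }
}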

Conclusion

As an advocate of Aspect Oriented Programming and meta programming, I’m happy to see more native features move us closer to those methodologies. I’m not sure where Microsoft is headed with these types of features, but I feel that we could have gotten more from this release. I believe it’s going in the right direction and I’m excited to see how this grows. Maybe a way for developers to provide their own compile time transformations?

For more details, see the MSDN article: http://msdn.microsoft.com/en-us/library/hh534540.aspx

 

About the author

Dustin can be found on the road less traveled, avoiding what's popular. He's a co-host on the
MashThis.IO podcast, and a contributor to Pluralsight. He regularly attends user
groups, code camps and other developer events to speak about aspect oriented
programming and a range of other topics. When he isn't working or speaking at
events, he is preparing for his next project or speaking engagement. 


Friday Five–September 7, 2012


1. Creating Strongly-Typed C# Event Tracing for Windows (ETW) Assemblies with Visual Studio

By BizTalk MVP Maxime Labelle

2.  Parallelize ProcessAdd with AMO

By SQL Server MVP Marco Russo - @marcorus

3. Create App Package for Windows Store

By Connected System Developer MVP Dhananjay Kumar - @debug_mode

4. Edit & Compile Team Foundation Server 2012 (TFS2012) Builds with Custom Build Activities

By Visual Studio ALM MVP Benjamin Day - @benday

5. Why You Should Upgrade Your PC to Windows 8

By Windows Expert-Consumer MVP Mike Halsey - @halseymike

New Code Review feature in Visual Studio 2012


Editor’s note:  The following MVP Monday post was written by Visual Studio ALM MVP Esteban Garcia

Code reviews are a critical part of software development. Not only do they help you keep your defect count down, they are also a great way to learn from other people's code. Code reviews also allow teams to communicate changes to the application with their peers.

There have been a few different ways to incorporate code reviews into Visual Studio in the past, including some third party components.  For the first time, Visual Studio 2012 includes a Code Review process out of the box by leveraging the Team Foundation Server Work Item Tracking system along with the use of shelvesets.  All the information is easily accessible through the redesigned Team Explorer window and Team Foundation Server Web Access. This allows code reviewers to compare code files, annotate them, and send comments back to the requester.

 

This process is supported through the use of the Work Item tracking system in TFS.  Two new Work Item Types were introduced:

-          Code review request: when the code review is created, it goes into the “requested” state, and when the process is complete it is closed.

-          Code review response: it gets created when the code review request goes out. If the code reviewer accepts the request it moves to “accepted”, and when the reviewer completes his or her work, it is moved to “closed”.

The fact that work items are used opens up a lot of possibilities from a traceability perspective: we can now not only trace the coding task associated with a code check-in, but also associate all related code reviews.
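Because code reviews are plain work items, you can also query them programmatically, for example to report on outstanding reviews. The sketch below uses the TFS 2012 client object model; the collection URL and project name are assumptions for illustration, and the work item type name matches the default process templates:

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class OpenCodeReviews
{
    static void Main()
    {
        // Hypothetical collection URL; substitute your own server.
        var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfs:8080/tfs/DefaultCollection"));
        var store = collection.GetService<WorkItemStore>();

        // WIQL: all code review requests in 'MyProject' that are not yet closed.
        const string wiql =
            @"SELECT [System.Id], [System.Title], [System.State]
              FROM WorkItems
              WHERE [System.TeamProject] = 'MyProject'
                AND [System.WorkItemType] = 'Code Review Request'
                AND [System.State] <> 'Closed'
              ORDER BY [System.CreatedDate] DESC";

        foreach (WorkItem workItem in store.Query(wiql))
        {
            Console.WriteLine("{0}: {1} ({2})", workItem.Id, workItem.Title, workItem.State);
        }
    }
}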

 

Now let’s take a look at how this is implemented in Visual Studio 2012.  I used the light and dark color schemes to differentiate between the person requesting the code review and the person reviewing the code. 

When developers complete coding some functionality, they can click on the “Request code review” link on the Home tab or the My Work tab in Team Explorer:

[Screenshots: Request Code Review links in Team Explorer]

That brings you to the “New Code Review” dialog where you can enter the following information:

-          Names of one or more code reviewers

-          Code Review subject

-          Area Path

-          Code Review Description

You can also list any work items related to the code review.  By default, the work items that show up as “In Progress” under the “My Work” tab will appear in this list:

[Screenshot: New Code Review dialog]

 

Once the code review is requested, all team members that were entered as reviewers receive an alert.  The code reviewer can see all the information about the code review in Team Explorer:

-          Who requested it?

-          What team members were added as reviewers?

-          Related Work Items

-          Comments

-          Changed files

[Screenshot: code review request details in Team Explorer]

The reviewer first signals that he or she is working on the review by clicking on the “Accept” link.  This tells other reviewers that the review is already in progress.  Then, the reviewer clicks on each changed file and the file comparison window comes up:

[Screenshot: file comparison window]

 

The reviewer can annotate this comparison or can add an overall comment from Team Explorer:

[Screenshot: review comments in Team Explorer]

 

All comments are stored with the code review and sent back to the developer that requested the code review.  Once the review is complete, the reviewer marks the review as complete by giving it a status of “Looks Good”, “With Comments” or “Needs Work”.


 

The person that requested the review can now see the Code Review Request work item along with any information entered by the reviewer:

[Screenshot: Code Review Request work item]

 

Once the changes are checked in, as you can see here, the Code Review request (ID 1619) is associated to the changeset.

[Screenshot: changeset with associated code review request]

 

About the author


Esteban Garcia is a Solutions Architect and ALM Consultant at AgileThought, Microsoft Visual Studio ALM MVP, ALM Ranger, Telerik Insider, and president of the Orlando .NET User Group. He has over 13 years of professional experience in software development, working in all phases of distributed object oriented enterprise application development and design using Microsoft technologies. Esteban excels in architecture and best practices and is passionate about Scrum and SDLC improvements using Visual Studio and TFS. Esteban also has a BS degree in Computer Engineering from the University of Central Florida.

Twitter: @EstebanFGarcia

Blog: http://estebanfg.blogspot.com/

 

MVP Mondays

The MVP Monday Series is created by Melissa Travers. In this series we work to provide readers with a guest post from an MVP every Monday. Melissa is a Community Program Manager for Dynamics, Excel, Office 365, Platforms and SharePoint in the United States. She has been working with MVPs since her early days as a Microsoft Exchange Support Engineer, when MVPs would answer all the questions in the old newsgroups before she could get to them.

Building a Windows 8 Touch application with WinRT JavaScript and Html5


Editor’s note:  The following post was written by ASP.NET MVP Peter Kellner

Introduction

With the introduction of WinRT (which replaces the Win32 API we are all used to), Microsoft has given JavaScript programmers first-class access to system libraries previously unavailable. Because WinRT is available directly to JavaScript, device access (GPS, motion sensors, etc.) is available directly to the application with no special security layer. That said, WinRT also brings with it restrictions that a typical browser does not have. Because such tight access to the OS makes it possible for applications to do harm, WinRT has a sophisticated security model in place to keep bad things from happening. In addition, WinRT forces your application to behave nicely or it may not work. For example, if your application takes too long to start, the WinRT library will automatically stop it from proceeding.

With WinRT, Microsoft has done a good job of balancing the needs of the many with the needs of the few. Your application represents the needs of the few (well, of you alone), while the needs of the many (all the other applications and services running on the device) are looked after as well.

Our Goal Today

We will be using Visual Studio 2012's support for building a WinRT application with JavaScript and HTML5. The application will be a simple conference track viewer that lets its users navigate tracks and the associated speakers. The data will be downloaded asynchronously from a remote server as JSON.

Windows 8 application development offers many special capabilities, and we will only scratch the surface of them here. Our example will implement the basic search contract and provide a simple about box for the application on the charms bar. The application will be fully touch enabled, and it will support all the requirements necessary to be submitted to the Windows 8 Store.

Built In Visual Studio 2012 Templates

As you would expect, Visual Studio has a new project wizard that lets you get started building your Windows 8 Store application. Having now been through the process of getting a token that lets me submit applications to the Store, I've learned that it is best, when you are starting out, to use the built-in templates as much as possible. Microsoft has strict guidelines around building WinRT touch applications, and if you don't follow them, your application will be rejected. You can of course build the application from an empty template and follow the guidelines yourself, but you will spend a lot of time learning specifications that are very detailed and picky. You'll find in the UX Guidelines words like:

If you find yourself cramming things to fit, it’s okay to use 5x5 mm targets as long as touching the wrong target can be corrected with one gesture. Using 2 mm of padding between targets is extremely important in this case.

Because your app will likely run at different resolutions and on different-sized devices, implementing the above requires a high level of CSS3 and HTML skill. If you use the templates built into Visual Studio 2012, these kinds of requirements are already taken care of for you by the included CSS3 style sheets.

Building the Conference Track Viewer

A Quick Look Ahead

Here is what our completed app will look like. Basically, it has two pages. The first page shows all the tracks, each with a background image that represents the track.

image

Then, when the user touches any of the listed tracks, they are presented with the details of that particular track, as follows.

image

In addition, there will of course be the charms bar on the right for both search and about, and on the bottom the app bar that lets you refresh in case one or more tracks have changed. As mentioned earlier, there are many more things we could do to make this app take better advantage of the Windows 8 platform. To mention just a couple: we could add push notifications so that manual refresh is unnecessary, and we could add sharing through the charms bar so the user could email or tweet a session or track.

Create a New Project for the JavaScript Windows 8 App

The first thing to do is to create a new Visual Studio 2012 JavaScript Windows 8 project. We of course want to select the template that most closely matches our scenario, for the reasons mentioned in the previous section, "Built In Visual Studio 2012 Templates". So: File/New/Project/JavaScript (from the opening screen of Visual Studio). Knowing that we want a top-level view of all the tracks, with a drill-down detail view of each track showing the individual sessions, the "Split App" template seems like our best choice, so let's choose that.

image

If we run this application out of the box, you can already see the similarities to the conference track viewer we plan on building. Below are some small screenshots of how it looks.

image image

Changing Where The Data Comes From (static to JSON)

One fundamental difference between the Split App template and the actual Conference Track Viewer we are building is that the data in the template is static, created simply by defining JavaScript arrays. In my opinion, the sample has more value if those arrays are loaded as if they were coming from an asynchronous source rather than from static memory.

So, if you look closely at the project created in Visual Studio 2012 from the Split App template, you'll see that default.html references a JavaScript file called data.js. Basically, data.js executes an anonymous JavaScript function which assigns the namespace "Data" several properties that include both data and functions. Because "Data" is a global, everything initialized when that anonymous function executes (which happens when the application first loads) stays initialized (thank you, JavaScript closures). I know that's a bunch of charged words, but at the end of the day what exists is a bunch of functions and data that were declared dynamically. Lines 16 to 23 of data.js, shown below, really are the full definition that the rest of the application uses to retrieve the data.

 WinJS.Namespace.define("Data", {
        items: groupedItems,
        groups: groupedItems.groups,
        getItemReference: getItemReference,
        getItemsFromGroup: getItemsFromGroup,
        resolveGroupReference: resolveGroupReference,
        resolveItemReference: resolveItemReference
    });

If you look, for example, in items.js, you will find at line 12 code that sets the data source of the list to Data.groups.dataSource, which is basically pulling data from the global "Data" declaration above.

ready: function (element, options) {
    var listView = element.querySelector(".itemslist").winControl;
    listView.itemDataSource = Data.groups.dataSource;
    ...

So, the challenge here is that we need to load the data into the "Data" global namespace from a JSON web service rather than from static data. To do this, we need to take into consideration that we cannot simply include a static JavaScript file in our application and expect the data to be loaded. It is not that a static JavaScript file cannot load the data; the problem is that we need to synchronize the loading so that the actual list (the grid of tracks in our case) loads only after the JSON service returns its data.

So, in order to do this, we change things up a little. In our Conference Track Viewer, instead of loading the data in a static JavaScript file, we simply define the functions for loading the data in that file (we call it loadData now). Then, in default.js, which is the code that executes when the program first launches, we add a call to the loadData() function in the initial "activated" event of the application. We pass the navigator object into this function so that after the data is fully loaded (using asynchronous JavaScript), and only then, the application navigates to our session tracks page, which can now display the data because it is fully loaded. The code in our program activation event (default.js) looks something like this:

app.addEventListener("activated", function (args) {

    if (args.detail.kind ===
        activation.ActivationKind.launch) {
        if (args.detail.previousExecutionState !==
            activation.ApplicationExecutionState.terminated) {
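            // Newly launched (not resumed after termination): the template
            // initializes application state here.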
        } else {
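            // Reactivated after being terminated: the template restores
            // session state here.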
        }

        if (app.sessionState.history) {
            nav.history = app.sessionState.history;
        }
        args.setPromise(WinJS.UI.processAll().then(function () {
            var searchString = "";
            svcc.Functions.loadData(nav, searchString);
        }));
    }
});

Then, in our loadData method, which is included as a separate file, we have code that makes the asynchronous JavaScript call and, when the data load completes, navigates to our first page (it looks like the following):

WinJS.Namespace.define("svcc.Functions", {
    loadData: function (nav, searchString) {
        var searchStringLower = "";
        if (searchString && searchString.length > 0) {
            searchStringLower = searchString.toLowerCase();
        }

        var sampleGroups = [];
        var sampleItems = [];

        var imageParams =
            "?width=160&height=160&mode=pad&scale=both&anchor=middlecenter&format=png";
        var urlString = svcc.Constants.baseUrl +
            "GeneralHandlers/Tracks.ashx?codecampyear=6";
        var xhrOptions = { url: urlString };
        var that = this;
        WinJS.xhr(xhrOptions).done(function(myXhr) {
            var result = JSON.parse(myXhr.response);
            for (var i = 0; i < result.rows.length; i++) {
                var trackId = result.rows[i].TrackId;
                var trackName = result.rows[i].TrackName;
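                // ... the remaining row processing is elided here; the full
                // source (see the download link at the top of this article)
                // builds the track/session arrays from these values and then
                // calls nav.navigate once loading has completed.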

All the source for this application is included in a link at the top of this article, so feel free to download it and take a look at all the details.

Adding Search Functionality

We've talked about how to retrieve the data the first time, but now let's talk about what happens when the user wants to look for a specific speaker or session by some search string. Because we knew we wanted to add this later, we gave our loadData() function a search string parameter. When it is passed in as null, all track sessions are downloaded; when a value is passed in, simple JavaScript code in loadData() filters the results.
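To make that concrete, here is a minimal sketch of the kind of per-row filter loadData() could apply. This is illustrative only: TrackName comes from the JSON parsing code shown earlier, while SessionTitle and SpeakerName are assumed field names, not taken from the sample's source.

function matchesSearch(row, searchStringLower) {
    // An empty search string means no filtering: keep every row.
    if (!searchStringLower) {
        return true;
    }
    // Concatenate the searchable fields; SessionTitle and SpeakerName are
    // assumptions for this sketch.
    var haystack = ((row.TrackName || "") + " " +
                    (row.SessionTitle || "") + " " +
                    (row.SpeakerName || "")).toLowerCase();
    return haystack.indexOf(searchStringLower) !== -1;
}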

So, let’s follow the guidelines posted at http://msdn.microsoft.com/en-us/library/windows/apps/hh465238.aspx for how to create a minimal search in a Windows 8 JavaScript app. In our application, there are only a few things that are required.

First, in the split.js JavaScript file, add the following line of code in your page's ready function. This causes the search charm to come up automatically as soon as the user starts to type anything on the keyboard. That is, the user sees something like the following, and as soon as they press the search button or press Enter, the event defined a little further below is executed.

image

And the code looks as follows:

// This function is called whenever a user navigates to this page. It
// populates the page elements with the app's data.
ready: function (element, options) {
    Windows.ApplicationModel.Search.SearchPane.getForCurrentView().
        showOnKeyboardInput = true;

Then, at the top of that same file (split.js), add an event listener that responds whenever a search is performed. This code executes when the user submits the query:

// Register event handler for QuerySubmitted
Windows.ApplicationModel.Search.SearchPane.getForCurrentView().onquerysubmitted =
    function (eventObject) {
        var searchString = eventObject.queryText;
        svcc.Functions.loadData(nav, searchString);
};

Adding an About Button

It is required that every program have an "About". That is, on the charms bar there needs to be a button the user can touch that gives some information about the maker of the program (you). To do that, all that is necessary is to create new about.html, about.js, and about.css files, just like creating any other page (put them in the /pages folder). Then, from the default.js file, register this new about page as follows:

// Populate settings pane and tie commands to settings flyouts.
WinJS.Application.onsettings = function (e) {
    e.detail.applicationcommands = {
        "aboutDiv": { href: "/pages/about/about.html", title: "About" }
    };
    WinJS.UI.SettingsFlyout.populateSettings(e);
};
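For reference, here is a minimal sketch of what the about.html flyout itself might contain. This is based on the standard WinJS SettingsFlyout pattern rather than the sample's actual markup, so treat the details as assumptions; the id must match the "aboutDiv" key registered above.

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>About</title>
    <!-- Standard WinJS references, as added by the Visual Studio templates -->
    <link href="//Microsoft.WinJS.1.0/css/ui-dark.css" rel="stylesheet" />
    <script src="//Microsoft.WinJS.1.0/js/base.js"></script>
    <script src="//Microsoft.WinJS.1.0/js/ui.js"></script>
</head>
<body>
    <!-- The id matches the "aboutDiv" command registered in default.js -->
    <div id="aboutDiv" data-win-control="WinJS.UI.SettingsFlyout"
         data-win-options="{width:'narrow'}">
        <div class="win-header">
            <div class="win-label">About</div>
        </div>
        <div class="win-content">
            <p>Conference Track Viewer, written by you, the app developer.</p>
        </div>
    </div>
</body>
</html>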

Then, when the user brings up the charms bar, there will be a new "About" button, and when it is tapped, the HTML page you just created (/pages/about/about.html) comes flying out of the side and looks as follows:

image image

Summary

In this article, we've built a very simple Conference Session Tracker. It uses the Visual Studio 2012 Split App template and adds a working search button as well as an About button to the charms bar. We've done it all using JavaScript, HTML5, and CSS3. There are a huge number of things we did not talk about; this was just a short sampler of building something straightforward that has real-world use. For further reading, I strongly suggest taking a dive over to http://dev.windows.com, where you'll find a wealth of resources and more information.

Good luck with your Windows 8 app building! See you in the Windows 8 Store.

About the author

peterpict

Peter Kellner, a Microsoft ASP.NET MVP since 2007, is founder and president of ConnectionRoad and a seasoned software professional specializing in high-quality, scalable, and extensible .NET web applications. His experience includes building and leading engineering teams both on and off shore. When not working, Peter spends most of his free time biking; he has ridden his bike across the globe. Most recently, he and his wife, Tammy, rode across the U.S., from California to Georgia, in just 27 days. You can find his blog at peterkellner.net and follow him on Twitter @pkellner.

Continuous Feedback using Storyboarding, Code Reviews, and the Feedback Tool in Visual Studio 2012 ALM


 

Editor’s note:  The following blog post was written by Visual Studio ALM MVP Mike Douglas

One of the key advantages of Agile, or any other iterative development process, is receiving feedback early and often throughout development instead of waiting until the end. Traditionally this has been accomplished by teams working closely together, where the decision makers can simply turn around and tell developers what they like and don't like. This type of collaboration becomes much more difficult with distributed development teams. Visual Studio 2012 introduces several tools that help distributed teams collaborate better by making it possible to request, provide, and receive feedback throughout the development process.

 

Whether your team is co-located or distributed, I will discuss some of the key points in the lifecycle for requesting and capturing feedback. Looking at the iterative development lifecycle in Figure 1, I have identified key feedback points in each phase. The iterative process can be any development methodology, but for this post I will use a Scrum-based process to discuss the feedback points.

 

image

Figure 1 - Continuous Feedback during Application Lifecycle

 

Define

Every development methodology has a process for understanding what the users need and want; in Scrum, this is commonly referred to as Release Planning. During this process the development team will usually try to understand the requirements. However, requirements can be misunderstood even when they are clearly defined. Visual Studio 2012 introduces a new feature called Storyboarding to help visualize these requirements. Instead of creating an additional tool, the Storyboarding feature was built as an add-in to PowerPoint. This provides a number of benefits, including:

·         A technology most developers are already familiar with

·         Built-in animations, presentations, and printing

·         A tool that the end users will likely already have for reviewing the storyboards

 

The add-in includes a number of shapes for designing user interfaces. In addition to the many included shapes, you can download and use additional shapes from the Visual Studio Gallery.

 

image

Figure 2 - PowerPoint Storyboarding

 

 

During Release Planning and the Delivery Sprints, use the storyboards to request feedback. In Release Planning, when we are defining User Stories, there is usually some discussion about each user story to determine effort and priority. As the product owners begin to explain what they want, you should begin capturing this information as acceptance criteria, and use the storyboards to capture design requirements. At this point in the process, acceptance criteria and storyboards won't be complete, but the goal is not to lose any of the information from these discussions. The storyboards must be put on a network location or, as I prefer, in the team project's SharePoint portal. Once the storyboards are in a shared location, they can be linked directly to a User Story for traceability and future reference, as shown in Figure 3 below.

 

 

image

Figure 3 - Linking a Storyboard to a User Story

 

In Release Planning it is often necessary to seek feedback from a larger group of Subject Matter Experts or other customers. Requesting feedback from those outside of the workgroup can be accomplished with Visual Studio 2012: create a Feedback Request and choose the recipients, the target application or website, and the questions to answer. The recipients receive an email with links to install the Feedback Client and launch the request. The Feedback Client, shown in Figure 4 below, opens and presents the user with the questions to answer and the ability to provide rich feedback through text, screenshots, and even voice input.

 

image

Figure 4 - Feedback Client

 

Develop

During the development of the application, also referred to as the Delivery Sprints, the User Stories are groomed and managed by the product owner. As stories are reviewed and become ready to be worked, the acceptance criteria should be fully defined to give the development team a clear understanding of what is to be developed. Feedback can be solicited during the design of the User Story to seek input on usability or to choose between design options. In fact, if you store the storyboard on SharePoint, you can use the Feedback Client to request feedback on the storyboard itself: the document link can be used as the URL for the Feedback Request, so the user can click the link and open the storyboard.

 

In addition to the design of the User Story, Feedback Requests can be used for User Acceptance Testing (UAT) to test the implementation of the user stories completed in an iteration. This gives the user the chance to use the application and provide meaningful feedback that he or she wouldn't be able to give in the Sprint demo.

 

Code Reviews are another type of feedback developers can provide to one another. Pair programming provides the most immediate feedback, with another developer commenting on code as it is being written. Many teams prefer to use code review meetings as the venue for providing feedback on each other's code. While these meetings have some benefit, they usually don't produce tangible, actionable results that can be measured.

In addition, it is often difficult to hold code review meetings consistently, because they are usually the first thing cut when deadlines are close. To overcome these shortcomings and challenges, Visual Studio 2012 provides a Code Review workflow built into Team Explorer. This allows developers to request peer code reviews as work is being completed, instead of waiting until the end of the Sprint when it is usually too late to make changes.

 

image

Figure 5 - Code Review Request

 

The code reviews also provide measurable results, because the reviews are tracked and stored in Team Foundation Server (TFS) as work items. Queries and reports can be created from these work items to show that reviews are being performed, and the details will show the quality of the reviews.

 

image

Figure 6 - Code Review Work Items

 

Operations

Deploying the application into production used to be the end of the story for developers and development teams. In today's iterative world, teams deploy to production much more regularly; some teams deploy every Sprint or even daily. Teams need to balance new features against production issues that might arise. The first way to make this more efficient is to use the enhancements in System Center 2012, which can send rich bugs to the development team and provide a two-way sync between TFS and Operations Manager. Read Better Together: TFS 2010 and System Center 2012 for more information.

 

In addition to feedback from the operations team, PreEmptive Analytics for TFS enables your applications to provide feedback automatically when exceptions occur. The tool is free with TFS and can be configured to automatically create or update work items in TFS, including the exception information. This can provide valuable information that the team can use to improve the quality of the application.

 

Summary

As you can see, feedback throughout the application lifecycle is an important theme in Visual Studio 2012. These ALM tools will help increase the efficiency and quality of your development teams, and they will also help distributed teams succeed with Scrum where it would otherwise have been challenging. Download Visual Studio 2012 and start taking advantage of the Continuous Feedback features today.

 

About the author

profile

Mike Douglas is a Solution Consultant at Deliveron Consulting Services. He specializes in working with development teams to implement Application Lifecycle Management (ALM) solutions that eliminate traditional silos between development, testing, and project management, establishing cohesive processes with the Visual Studio ALM tools. He is also a Microsoft Visual Studio ALM MVP and Microsoft ALM Ranger with a goal of accelerating the adoption of Visual Studio through out-of-band solutions for missing features and guidance. Mike is an experienced presenter and has spoken at local user group meetings and at conferences such as HDC and KCDC. He has worked with a number of technologies in his 14 years of experience building enterprise solutions, including BizTalk, SharePoint, SQL Server, and WCF. He actively maintains the open source project Team Deploy on CodePlex for deploying MSIs using Team Foundation Server. Mike also enjoys sharing his experiences with Visual Studio ALM and TFS on his blog at www.CodeSmartNotHard.com.

Friday Five–September 14, 2012
