Document Conversations does not equal hover panel 'Post'

DocConv

Back in June 2014 Microsoft/Yammer announced the arrival of a new feature called 'Document Conversations'. Available at the time of writing in some tenants (full rollout is in progress), this feature adds a fly-out panel to Document Libraries in Office 365. The full details of the feature can be seen on the Office Blog here: http://blogs.office.com/2014/06/03/yammer-brings-conversations-to-your-onedrive-and-sharepoint-online-files/

Our Office 365 tenant is set up with 'First Release', which provides upcoming updates about two weeks prior to their formal rollout. Over the last month we've seen the Document Conversations feature coming and going. Hopefully it will be fully completed very soon. During this rollout I took the time to try out the feature and get a feel for how it works.

Document Conversations in action on OneDrive for Business

Browsing to my OneDrive for Business page, nothing different appears on the display. As you can see below, it still looks the same as when 'Site Folders' rolled out earlier this year.

image

So to invoke the 'Document Conversations' window you have to actually browse to the document. As you can see below, a new right-hand sliver has appeared with the Yammer logo and an indicator to click to expand.

image

Clicking the Document Conversations bar expands it into the right-hand pane, much like the Apps for Office do in Office 365 Pro. On first use, or when you've not signed into Yammer, you will be prompted to sign in. The screen grab below was taken after signing in. The first thing to note is that it isn't displaying any threads; that's simply because I had newly created this document for the purposes of this article.

image

Next let's create a new 'Yam' from the 'Document Conversations' pane. At this time it allows the user to select a group to post the 'Yam' into; interestingly, there is no way to set up a default for this. I think it would be awesome if Microsoft had provided a 'default group' setting on the hosting Document Library settings, as I suspect the feed is using Yammer Embed, which has the ability to set a default group. That way users could configure their defaults and avoid everyone posting into the 'All Company' group.

image

After posting the 'Yam' it can be seen in the 'Document Conversations' pane. Note how it's being presented as an OpenGraph object and not a url.

DocConv

Below is an example of a reply to the conversation thread.

image

This is the same conversation thread within Yammer.

image

Posting from SharePoint to Yammer

Before the ‘Document Conversation’ feature was designed and built one of the first Yammer integrations with SharePoint was the ‘Post’ option which appeared on the document hover panels. The screenshot below shows the ‘Post’ option on the same file we just used for the ‘Document Conversation’ demo.

image

Clicking ‘Post’ launches a modal window with a url in the message body so you can type a ‘Yam’. As you can see from the screenshot this is a pretty basic UI.

image

The screenshot below is the 'Post' in the Yammer group.

image

Comparing the two approaches

So we've seen how both approaches work. The one thing I found puzzling was that in the Yammer group I'd seen two threads about the same document: one from the 'Document Conversations' pane and one from the 'Post' modal dialog. This didn't make sense at first glance; I would have expected both to reference the same item (url) as the OpenGraph object.

Document Conversation

The ‘Document conversation’ thread has the following JSON returned from the API.

image

The body of the 'Yam' contained a url of the file path with the query string ?web=1, which when launched opens the document in the Office Online app.

image

The OpenGraph object is detailed below. Again, note the url has the ?web=1 query string.

image

Post from hover panel

The ‘Post’ thread has the following JSON returned from the API. We can see nothing special in the thread itself.

image

The content returned by the 'Yam' this time shows that the WOPI (Office Online) url has been used.

image

The view of the attachments XML confirms this information.

image

So doing it again

Doing it again via 'Post' creates a brand new post. The 'Post' feature adds a new OpenGraph object and thus starts a new thread, rather than finding the existing thread and presenting it back to the user in the popup window.

image

Conclusion

So the two methods use different urls and thus create two different OpenGraph objects.
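As a rough illustration (the urls below are hypothetical, just following the two shapes seen in the API responses), Yammer appears to key OpenGraph objects on the exact url string, so the two forms can never collapse into one thread:

```javascript
// Hypothetical examples of the two url shapes posted for the same file.
// Document Conversations posts the direct file path with ?web=1:
var docConvUrl =
    "https://tenant.sharepoint.com/personal/user/Documents/Demo.docx?web=1";
// The hover panel 'Post' uses the WOPI (Office Online) url instead:
var hoverPostUrl =
    "https://tenant.sharepoint.com/_layouts/15/WopiFrame.aspx?sourcedoc=%2Fpersonal%2Fuser%2FDocuments%2FDemo%2Edocx";

// Two different strings, so Yammer creates two OpenGraph objects and
// therefore two separate conversation threads.
console.log(docConvUrl === hoverPostUrl); // false
```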

It would be great if Microsoft could bring them into alignment so that a single url is used and all conversations appear in the same thread.

Delve YamJam summary

Delvelogo

This week people who had their Office 365 tenants set up with 'First Release' started to see the long-anticipated Delve (formerly codename Oslo) arriving on their tenants.

Microsoft organised a YamJam for Delve in the Office 365 Technical Yammer network here: https://www.yammer.com/itpronetwork/#/threads/inGroup?type=in_group&feedId=4386440

This article is a summary of the information which is correct at the time of writing.

Is Delve coming to on-prem?

A hybrid approach is more appropriate due to the complexity and processing power required to drive the OfficeGraph engine. There will be APIs to allow connection to other data sources for the signals driving the OfficeGraph.

Microsoft are planning a hybrid connector that can integrate signals and content from on-premises, though they have no current timeline. This will probably feature in scenarios where Lync and/or Exchange have an on-premises installation.

Privacy concerns

Some users raised concerns around privacy. The example cited was a company where Delve was showing certain HR documents as trending, for example psychological assistance, domestic partner coverage and maternity benefits. The question was around being able to exclude certain content from producing signals.

The documents could be excluded through the normal SharePoint permissions capabilities. Delve relies on the search index, so excluding a file or folder will exclude it from Delve as well.

Currently there is no feature to exclude documents from Delve but have them available to everyone via SharePoint/Search.

A side note was raised about storing HR documents in Yammer and the fact that 'viewing' one shows up in the activity feed in the top right. This gives people visibility of what other users are looking at, so someone looking at HR docs around maternity is effectively announcing that interest to the whole organisation. Not so good.

Delve never shows 'who' viewed a document. Trending is invoked when multiple people who have a relationship with you have accessed the doc; the author is the named entity. This is slightly confusing in the UI, as at a glance the name appears to be the user who viewed the document. Careful communication would be needed on rollout.

Delve only shows if someone modifies a document (this is available through SharePoint anyway). It doesn't show who viewed a document; where many people have viewed one, Delve says 'several of your colleagues have viewed this document' but never divulges the names.

‘Trending’ does not mean a person viewed it, only that your colleagues are generating activity around it (no information was given on the definition of 'activity').

CSOM / JSOM API availability

OfficeGraph will have an API. Current information is available here: http://msdn.microsoft.com/en-us/library/office/dn783218(v=office.15).aspx

Could OfficeGraph be consumed in PowerBI?

In theory this should be possible as it has an API.

Restricting the rollout to specific users

Like an ESN, Delve thrives on a wide and deep network of users. By restricting it to a subset, an organisation would fall into the 'doomed social pilot' trap of not having enough signals to deliver the full value. Obviously this is an ESN success perspective; organisations will have their reasons for this request, and regulation, change management and security were all cited.

It was also noted that you can disable Delve at tenant level; it was unclear whether this disables the Delve UI alone or includes the OfficeGraph underpinnings.

When will I get it?

This is being rolled out to 'First Release' tenants first.

What is the Delve UI item display limit?

Answer: 36 documents before adding filters by using search. Microsoft said 36 was chosen as the starting point based on internal trial data, which showed that click rates dropped to zero at a certain point in the page.

Microsoft’s choice of name Delve

Mixed feelings: those who aren't English speakers said that 'Delve' doesn't always have a real meaning in some languages; others just preferred Oslo and thought Delve didn't really jump out. As with most questions like this, nothing really bad comes of the name. Let's just hope it doesn't get a rebrand in 6 months ;)

How will Delve handle existing content and groups?

Being search based, it can pick up everything in the tenant today.

Which plans get Delve?

Office 365 E1-E4 and the corresponding Gov and Academic plans.

At first release Delve gets signals from Exchange, OneDrive for Business, SharePoint Online and Yammer. Primary content is surfaced from OneDrive for Business and SharePoint Online team sites.

What determines the people list?

This is the top five people you interact with.

Useful links

Delve documentation: https://support.office.com/Article/Who-can-see-my-documents-f5f409a2-37ed-4452-8f61-681e5e1836f3

Delve for Office365 admins here: https://support.office.com/Article/Delve-for-Office-365-admins-54f87a42-15a4-44b4-9df0-d36287d9531b

OfficeGraph API documentation http://msdn.microsoft.com/en-us/library/office/dn783218(v=office.15).aspx

An introduction to PowerBI

PowerBI

PowerBI is the cloud version of the Microsoft BI stack. After seeing some awesome things at SPC14 I decided to take a dive into the course material on MVA.

Notes from watching the MVA course on PowerBI

(MVA training course)

Power Query

(MVA session)

Power Query represents the ‘getting data’ part of the stack. It’s available as an Excel 2013 plugin (Download from Microsoft). Over time this will be the way to get data into Excel, replacing the ‘Data’ and Power Pivot ‘Data’ ribbon features.

It has a huge number of available data source connectors:

  • From web which can pull in web based data sources
  • From file which can pull in from files like csv
  • From database which can pull from a wide variety of database systems
  • From other sources, basically a collection of sources not matching the above categories, such as SharePoint lists, the Azure services and Facebook

This list will continue to grow as the Microsoft team build more and more extensions.

There is a great feature for finding data. The 'Online Search' can find published data and queries from across the web (Microsoft maintain a huge collection in their catalogue), from your internal organisation's catalogue and from shared queries.

Once a query is configured, Power Query does as much as it can to pass the right query to the data source. For example, if you're pulling relational data from a DBMS, Power Query passes the restrictive query down to that DBMS to push the query load onto that system, which helps ensure performance is optimal. Where a user is using flat files it really can't be offloaded, so the local machine bears the brunt of the performance hit. This would be something to consider when designing solutions. I personally think this intelligence is amazing and could really play well when designing a self-service BI solution.
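This pushdown behaviour can be sketched with a trivial comparison (the function and column names are mine, not Power Query's internals): against a relational source the restriction is translated into the source's own query language, whereas against a flat file every row has to be loaded and filtered locally.

```javascript
// Sketch only: a filter "folded" down to a DBMS versus applied locally.
// Table and column names are hypothetical.
function buildFoldedQuery(table, column, value) {
    // Relational source: the restriction travels down as SQL, so only
    // matching rows ever leave the database.
    return "SELECT * FROM " + table + " WHERE " + column + " = '" + value + "'";
}

function filterLocally(rows, column, value) {
    // Flat-file source: all rows are already on the local machine,
    // which bears the cost of the filtering.
    return rows.filter(function (row) { return row[column] === value; });
}

console.log(buildFoldedQuery("Sales", "Region", "North"));
// SELECT * FROM Sales WHERE Region = 'North'
```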

The query can pull the data into either the local Worksheet or the Data Model. Pulling into the Worksheet is best employed where the dataset is at the smaller end of the spectrum; larger data collections benefit from being pulled into the Data Model. Also consider your file size: bringing the data into the model drastically reduces its footprint. They gave an example of a 4GB file being compressed to about 160MB. This is important when you consider the maximum file size in PowerBI is 250MB.

Another neat feature is pulling data from a 'folder' into one collection, so you can bring together multiple csv files, for example, in a straightforward fashion.

The query can actually parse things like JSON which opens up possibilities to call data services and simply transform them to tables and related tables.
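A sketch of that kind of shaping, flattening a nested JSON response into table-style rows (the payload shape here is invented for illustration):

```javascript
// Sketch: turning a nested JSON service response into flat, table-like
// rows, the sort of transformation Power Query does when it parses JSON.
function toRows(orders) {
    var rows = [];
    orders.forEach(function (order) {
        order.items.forEach(function (item) {
            rows.push({ orderId: order.id, product: item.product, qty: item.qty });
        });
    });
    return rows;
}

var sample = [{ id: 1, items: [{ product: "Ale", qty: 2 }, { product: "Stout", qty: 1 }] }];
console.log(toRows(sample));
// returns two rows, one per order item, each carrying the parent order id
```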

Power Query Editor

Once you create a new PowerQuery it launches the editor where you can perform loads of neat things on the data.

  • Manipulate the columns, like ordering, name, format, using them as headers
  • Splitting columns by delimiters etc.
  • Filtering
  • Removing duplicates
  • Merging additional queries into the same sheet, thus merging datasets
    • Bringing in just columns you want
    • Creating your own dimensions
    • Choosing which columns you merge on
  • Unpivoting data; you often receive data in a pivoted format because that is generally how it is presented for reporting
  • Formula bar, which allows you to use the query language to create additional data interactions
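The unpivot step in particular is worth a quick sketch. Conceptually it turns each non-key column back into an attribute/value row (the column names below are hypothetical):

```javascript
// Sketch of what the editor's Unpivot command does conceptually:
// every non-key column becomes an attribute/value row.
function unpivot(rows, keyColumn) {
    var out = [];
    rows.forEach(function (row) {
        Object.keys(row).forEach(function (col) {
            if (col !== keyColumn) {
                out.push({ key: row[keyColumn], attribute: col, value: row[col] });
            }
        });
    });
    return out;
}

console.log(unpivot([{ region: "North", Jan: 10, Feb: 12 }], "region"));
// returns [{ key: 'North', attribute: 'Jan', value: 10 },
//          { key: 'North', attribute: 'Feb', value: 12 }]
```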

Data stewardship

(MVA session)

This session examines the problem space: what are the things the business is trying to solve?

Information challenges:

  • Searching for data takes time, there are lots of useful datasets within an organisation which are often not available to others
  • Preparing it for use takes time, it is not always clean or in a sensible format for reuse, lots of redundant processing
  • IT spends time trying to service requests; due to the platforms and processes, most users have to get IT to do the heavy lifting to create views, reports etc., and IT often can't react in the timeframe the business needs
  • IT has to also govern access and data use
  • Lack of trust is significant within the business user community; because the stewardship of the data is often unknown, most users question the validity or authenticity of the data

In some organisations the role of ‘Data Steward’ exists. These people tend to have a foot in both end user and IT camps. They tend to have understanding of what IT have provisioned and what the business needs.

Quote from Gartner “… only business users close to the content can evaluate information in its business context”

Data stewards can promote queries into the corporate data catalogue and help the business users understand them.

(note to self – using Yammer groups to help here would be worth looking at)

It's an interesting viewpoint. Consider a scenario where a dataset is held in an Excel workbook and that workbook gets emailed to another person. At this stage you have forked the data and it has instantly become less trustworthy and accurate. Now imagine if instead you'd shared the query: the data remains a single source and is thus more trustworthy. In business it's often in the earlier stages of report creation that datasets which would be useful to share are created. By looking to share the queries rather than the end-product reporting we enable more uses of the data building blocks.

The common steps an information worker is taking:

  • Identifying the need, what is the problem they are trying to solve
  • Identifying the data, what data could help them solve the problem, which domains are they from, who owns them

The ‘Data Steward’

(taken from the MVA slides)

image

Data Catalogue

This is the way we can promote the data:

  • Stores and processes the metadata about the data sources available, users and their relationships
  • Provides the search functionality
  • Connects to the corporate data through the integration layer

It is a set of Office365 services.

Important note: sharing to the catalogue shares only the metadata of the query, not the queried data itself. The important thing to realise is that the user who executes the query still needs access to the underlying data.
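To make the distinction concrete, a catalogue entry can be pictured as something like the object below (the shape is my own illustration, not the actual service schema):

```javascript
// Illustrative only: a shared query carries metadata, never result rows.
var sharedQuery = {
    name: "UK Store Sales",
    description: "Monthly sales totals by store",
    dataSources: ["hypothetical on-prem SQL data source"],
    certified: true
    // note: there is no rows/data property; the executing user's own
    // credentials fetch the data, so their access is always re-checked
};
console.log(Object.keys(sharedQuery)); // metadata keys only, no data
```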

Power Query

For Information workers:

  • Search for and access relevant data
  • Filter and merge data

For Data stewards:

  • Define repeatable data queries
  • Publish, share and annotate those queries

Data Management Portal

  • Manage metadata, shared queries and published data sources
  • Monitor telemetry, such as usage and searches
  • Gain insight on data lineage

Admin Centre

  • Govern the data integration layer, admin the on-prem access

Data Publishing

Diagram taken from MVA session

image

It shows the multiple ways to publish your data to the data catalogue.

Every Office365 PowerBI tenant gets their own ‘Corporate Data Catalogue’.

The admin centre in Office365 looks something like this

image

By default when a user searches they search across both Corporate and the Microsoft public catalogues.

Some of the old drawbacks of documenting data services were that finding the endpoint service and reading about it were often disconnected. This led people to question the investment in the documentation as it was often missed. With PowerBI this description is right there next to the result in Power Query, making the metadata instantly accessible and therefore valuable.

To get all these capabilities you need to be 'signed in' to Power Query via your tenant. It sounds obvious, but it's very important to factor into the messaging during rollout.

Share Query dialog

So as the Data Steward you can choose to ‘share’ your query from within Power Query once you’re happy with the query configuration.

The example from the session looked like this.

image

So the shared query has the following:

  • Name which is the visible name for that query, here you should be descriptive to help people quickly scan a list for it
  • Description, helping further explain the data query, one point to think about is what sort of copy style your organisation needs, it shows up anywhere the query appears
  • Data sources, this lists those data sources being used inside this query, this will be important when you consider the data lineage
  • Sharing settings, if the user sharing is part of the ‘Data Steward’ role then they can certify the data as officially supported/verified as truth, ‘Share with’ is pretty obvious
  • Documentation url, the url to anything which is classed as the documentation, in larger organisations this could be the direct document office online url
  • Upload preview rows, does exactly what it says on the tin

Note that the data sources within the organisation can themselves be annotated to provide friendly information to the user. In the dialog above, the 'view portal' link goes to the PowerBI admin screen, which allows a further description to be created for that data source. Notice the contact information to help a user gain access to this resource.

image

Some insights from the analytics would allow you to work out the most highly utilised queries/data sources, as well as some that show up a lot in search but are rarely used. All this starts to help drive value and returns from the exploration and sharing of corporate data.

Power View

(MVA Session)

PowerView is the ‘visualise’ element of PowerBI. It is the interactive, data exploration, visualisation and presentation capability. It is based in Excel 2013.

The session scenario:

image

A retail establishment whose sales are collected through the POS system and contained in the traditional hierarchy of Category – Sub Category – Item.

The role in the scenario is of the ‘Bar Owner’ who has little technical experience but wants to analyse his POS data to create some targeted promotions.

The demo brought data into a Power Pivot model as below:

image

The basic step to start is to go into your sheet and from the ‘Insert’ ribbon choose ‘Power View’. The Power View brings in all the available fields and tables from the data model.

Three ways to add fields to the canvas:

  • Check the check box in the fields list in the tool pane
  • Drag it to the canvas
  • Drag it to the ‘Fields’ list in the tool pane

Once you have selected the fields you can change the representation from the ribbon options. Manipulating the display is as easy as selecting the ‘style’ you desire and from the tool pane you can manipulate the fields and filters.

It has a very neat feature: when you interact with one of the graphs it highlights the same data pivots in the other graphs in the sheet. This 'cross highlighting' works across all the graphs in the view. Even legends are interactive.

Removing the title from individual graphs is done by selecting the graph and from the ‘Layout’ ribbon selecting ‘None’ from the ‘Title’ button.

By the end of the report creation it looked like this

image

If the default colours and design are not to your liking you can change the theme and background from the ribbon.

To add filters to the report, simply drag them from the tool pane list onto the ‘Filters’ section of the report.

Due to the interactivity of the report, a user can really drill down and use the canvas elements to gain insights into their data.

Demo included adding a ‘KPI’ from the PowerPivot ribbon.

image

Pick the column to base the KPI on and then configure the values and display as you require. Once added, it updates the PowerPivot data model and the new field appears in the Power View tool pane for selection (it has a traffic light icon).

Adding an in-canvas filter is as simple as dragging the field from the tool pane onto the canvas. This adds that single column onto the canvas, but we can go a step further and turn it into a 'Slicer' from the ribbon. Now when a user selects a value from that column it cross-highlights the rest of the canvas.

The demo then showed how to roll up the view using 'Matrix' for the main table and enabling 'Show levels' to roll up the data further. Modifying which columns get displayed is again done through the tool pane options.

In the tool pane, when you drag a single field to the 'Tile By' option it creates tabs, each with its own design surface. You then set up the graphs within this surface.

Power Map

(MVA Session)

Power Map is the mapping visualisations for your data.

Different steps to go through to create a Power Map visualisation:

  • Map data
    • Data in Excel, Power Pivot data model
    • Geo-code with Bing
    • 3D and 4D visuals
  • Discover insights
    • Play over time frames
    • Annotate anything interesting
    • Capture a timeframe as a scene
  • Share
    • Add effects
    • Interactive touring
    • Share workbooks

From the ribbon you can choose the ‘Map’ option.

image

Building a new tour

A tour allows you to present the information and guide the viewer across it.

image

The left hand side is the scenes, the centre is the scene content and the right hand side is the data.

An interesting note: the 'other' column basically allows you to map anything from your data that the Bing mapping API can resolve. The example given was airport codes.

image

Adding more than just text annotation, the example here is a picture

image

During a tour you can still interact with the map, pause and drill down into the display.

image

PowerBI and mobile BI

(MVA Session)

You can add the PowerBI app from the app catalogue.

As a note, you can add the default site samples into your PowerBI site to help you get an understanding of some of the features.

To have PowerBI features light up for a specific Excel document it needs to be ‘Enabled’ in the options (part of the tile).

From the PowerBI dashboard, clicking the file opens it within Office Online.

‘Featured Reports’ is an interesting feature as it allows you to promote a report to all the users of your PowerBI capability. This is as simple as clicking 'Add to featured reports' from the tile context menu. There are no specified limits to the number of reports that can be 'featured'.

If you don't want to share a report with everyone but still want a shortcut to it, you can select 'Favourite' from the tile context menu to effectively pin the report as a favourite. These are then found under 'My Power BI' in the suite bar menu.

You can also schedule a data refresh from the tile context menu. This will call the data source and invoke a data refresh into the report.

Mobile PowerBI

You can install the Windows 8 app from the store. (Interesting that it's called 'mobile' when in fact it's the Windows OS ;))

By default it contains the Microsoft sample data. As seen in the screen grab below.

image

You can then add your own reports via 'Browse' from the context bar: select your site and pick the Excel file containing your report. Tag it as a favourite and it'll show up on the main dashboard. Clicking into the report will then load the same Power View report as you would see in the browser.

image

You can also take advantage of the normal ‘Share’ capability from Windows 8. Here I’m choosing to share a report via email.

image

The thumbnails for the reports are also dynamically updated at intervals to reflect the actual view of the report. So even this adds a ‘peek’ style to the dashboard.

Natural Language Querying with Q&A

(MVA Session)

Q&A is natural language querying. Its primary focus is data exploration.

Key problems Q&A is aiming to assist organisations with:

  • How do I find the right data to answer the question I have at this point in time? There could be numerous data sources and how would you know which one to look at?
  • How do I find the right answer from within the huge collection of reports? Maybe the reports just don’t give the right slice of data or present the information in a form which makes sense.

So Q&A was born to address both of these challenges.

The demo scenario is covering a typical customer data model.

image

From within the PowerBI site, to enable an Excel workbook for Q&A you choose 'Add to Q&A' from the tile context menu.

image

This is an important step to remember, as Q&A only searches within 'enabled' workbooks. Another thing to note is that SharePoint permissions still dictate what a user can access. For example, if a workbook is only shared with certain people, then only those people can use Q&A against it, even though it is enabled.

To use Q&A you click the option in your PowerBI site and you’ll see something like this below.

image

Notice that the UX is very 'search' like; this was done consciously by Microsoft to entice users to use Q&A like they would a search engine.

So after the user starts to type a question, Q&A begins to work its magic. Immediately it narrows down which data models it might be using. Notice the blue text below the query box: this is the 'Restatement', which shows how Q&A is transforming your typed question and querying the data models. Below that are query suggestions Q&A thinks you might be trying to perform.

image

A really neat feature is that Q&A best-guesses the visual representation for the returned query, providing the user with what it considers the best display for the results. On top of this it can also apply filters; as you can see, I've added an ordering to the query.

image

As you can see the filter pane is also available within the canvas for further filtering and settings.

The query box also helps the user by greying out words which it can't map to the data model.

Building models for Q&A

Sometimes there are things that just need some extra work to help Q&A assist the user, such as naming conventions in the data model versus the names the average business user would use. You can use 'Synonyms' in the data model to teach Q&A the other terms.

The formatting and modelling you use to drive Power View also supports Q&A in providing a better experience.

  • Use the correct data type for columns to help Q&A understand how to make the query to the model.
  • Set up the ‘Default Field Set’ from the ‘Advanced’ ribbon. These settings determine which fields are brought onto the canvas in Power View and they also allow Q&A to display those fields when a user just types the name of the table.
  • Set up the ‘Table Behaviour’ from the ‘Advanced’ ribbon. Setting the ‘Default Label’ allows Q&A to use the correct axis.
  • Set up ‘Synonyms’ within the data model from the ‘Advanced’ ribbon.

Note: At the time of writing this feature only comes with the Office 365 Office version from your tenant software subscription.

Synonyms are displayed within a tool pane against your model. Within each field you can add, as a comma-delimited list, other words which might be used when looking for that column. The first one listed is the 'primary noun' for that entity; this is what Q&A will use in the restatement (the blue text under the user-entered query).

image

Sharing with Q&A

The first option is to simply copy the url. Like most search experiences, once you enter a query the url reflects it, so you can distribute this url to others to effectively reuse the query. If you think about this more widely, imagine you are asked a question by a business user: you can now find the data via Q&A, help them jump in, and then further explore from that point.
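Because the question rides in the url's query string, building a shareable link is essentially just url-encoding. As a sketch (the site address and the parameter name "q" below are guesses for illustration, not the real PowerBI url scheme):

```javascript
// Sketch: a Q&A question becomes shareable simply by sharing the url.
// The "/QnA?q=" shape and the site url are hypothetical.
function buildShareUrl(siteUrl, question) {
    return siteUrl + "/QnA?q=" + encodeURIComponent(question);
}

console.log(buildShareUrl("https://contoso.sharepoint.com/powerbi", "total sales by store"));
// https://contoso.sharepoint.com/powerbi/QnA?q=total%20sales%20by%20store
```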

The second option is to add the question to the 'Featured Questions' in the PowerBI site.

image

This allows you to set up various settings for the question before adding it, such as the question itself, whether to show it on the home screen, size and colour, and finally an image. The image below is the home screen with the question added.

image

This image is of the Q&A screen with the question listed.

image

Behind the scenes in Q&A

The overall steps are:

Search for interpretations of question – looking across all the workbooks enabled for Q&A and finding which ones could answer the question.

image

Score and select the best interpretation – ranking the workbooks answering capability.

image

This 'ranking' means that Q&A will use the top option.

image

Select best visual – determine which visual can explain the answer. This runs through a rules engine.

image

Influencing the best answer

Search for interpretations of question > Use data modelling and synonyms

Score and select best interpretation > workbooks in the site

Select best visual > data modelling (data types, data categories)

Data Management Gateway

(MVA Session)

The Data Management Gateway has three main capabilities:

  • How can I enable corporate data feeds over OData?
  • How can I enable discovery in Power Query, enabling people to find data?
  • How can I refresh Excel workbook data using SharePoint Online?

The conceptual layout for the Data Management Gateway is detailed in the diagram.

image

The Data Management Gateway sits in two places: on-premises through a client, and online through PowerBI (the PowerBI admin centre).

How does it work?

OData feeds through the Data Management Gateway

image

  1. Power Query requests data from the OData feed
  2. Data Management Gateway connects to the data source
  3. Results are returned to the Data Management Gateway
  4. The Data Management Gateway returns the data to Power Query

Data refresh

image

  1. Excel workbook is loaded into SharePoint
  2. Data refresh is called
  3. Connects to the Gateway Cloud service
  4. The Gateway Cloud service checks the user's authorisation to perform a refresh
  5. If authorised, it sends the command to the on-prem Data Management Gateway
  6. Data Management Gateway sends the command to the data source
  7. The data source returns the results to the Data Management Gateway
  8. The results are transferred up to the Gateway Cloud service
  9. The results are returned to the Excel workbook

PowerBI Admin Centre

The admin centre provides the following:

  • Allows you to install and monitor the Data Management Gateways for the organisation
  • Configures access to the cloud enabled data sources
  • Exposes OData feeds to the corporate data sources
  • Configures the PowerBI user roles

You can access the Admin centre from either your tenant admin portal or directly via https://itadmin.clouddatahub.net

image

An interesting note: the data source setup can be configured with either a Windows or database account, BUT you still need to add users to the data source for them to actually access it.

SP.RequestExecutor cross domain calls using REST gotcha

CrossDomain

Recently I was building a prototype SharePoint hosted app to add items into the Host web. The basic operation of the app is that it queries the Host web for specific list types, then allows a user to add a collection of new items to the selected list. So read and write operations.

When dealing with the host web it is important to remember that you are subject to ‘cross domain’ calls and the restrictions placed on them. The browser protects users from cross-site scripting by specifying that client code can only access information within the same URL domain.

Thankfully SharePoint comes with some inbuilt options for these calls. The Cross Domain library is the primary option in either JSOM or REST forms.

I’ve been leaning towards REST at the moment, primarily as a learning focus so I could get used to this method of data interaction.

So the first code sample is to get the host web task lists:

NB: This is a cut down extract of the function just to highlight the core request.

var executor;

// Initialize the RequestExecutor with the app web URL.
executor = new SP.RequestExecutor(appWebUrl);

// Get all the available task lists from the host web
executor.executeAsync({
    url: appWebUrl +
        "/_api/SP.AppContextSite(@target)/web/lists/?$filter=BaseTemplate eq 171&$select=ID,Title,ImageUrl,ItemCount,ListItemEntityTypeFullName&@target='" + hostWebUrl + "'",
    method: "GET",
    headers: { "Accept": "application/json; odata=verbose" },
    success: successHandler,
    error: errorHandler
});

Note from this sample how the host and app web URLs are combined with SP.AppContextSite within the url. This is the key to invoking a cross-domain call in REST using SP.RequestExecutor.
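As a hedged sketch, that URL composition can be factored out into a small helper (the helper name and example URLs are mine, not part of the SharePoint API; the SP.AppContextSite(@target) pattern is SharePoint's):

```javascript
// Compose a cross-domain REST URL using the SP.AppContextSite(@target) pattern.
// Appends @target with '?' or '&' depending on whether a query string exists.
function buildCrossDomainUrl(appWebUrl, hostWebUrl, resourcePath) {
  var separator = resourcePath.indexOf("?") === -1 ? "?" : "&";
  return appWebUrl +
    "/_api/SP.AppContextSite(@target)" + resourcePath +
    separator + "@target='" + hostWebUrl + "'";
}

var url = buildCrossDomainUrl(
  "https://contoso-app.example/app",
  "https://contoso.example/sites/dev",
  "/web/lists/?$filter=BaseTemplate eq 171");
```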

The second snippet of code is the one which adds the new item to the host web list:

NB: This is a cut down extract of the function just to highlight the core request.

var executor;

// Initialize the RequestExecutor with the app web URL.
executor = new SP.RequestExecutor(appWebUrl);

var url = appWebUrl +
    "/_api/SP.AppContextSite(@target)/web/lists(guid'" + hostWebTaskList.Id + "')/items?@target='" + hostWebUrl + "'";

// Metadata to update (named itemPayload so it doesn't shadow the incoming item).
var itemPayload = {
    "__metadata": { "type": hostWebTaskList.ListItemEntityTypeFullName },
    "Title": item.title,
    "Priority": item.priority,
    "Status": item.status,
    "Body": item.body,
    "PercentComplete": item.percentComplete
};

var requestBody = JSON.stringify(itemPayload);

var requestHeaders = {
    "accept": "application/json;odata=verbose",
    "X-RequestDigest": jQuery("#__REQUESTDIGEST").val(),
    "X-HTTP-Method": "POST",
    "content-length": requestBody.length,
    "content-type": "application/json;odata=verbose",
    "If-Match": "*"
};

executor.executeAsync({
    url: url,
    method: "POST",
    contentType: "application/json;odata=verbose",
    headers: requestHeaders,
    body: requestBody,
    success: addPrimeTasksToHostTaskListSuccessHandler,
    error: addPrimeTasksToHostTaskListErrorHandler
});

Ok, so at this point you’re probably wondering what the gotcha mentioned in the title is. Well, here it comes, and it’s one of those cut-and-paste horror stories which cost developers all over the land huge amounts of wasted effort.

So take a look at the following block of code:

var url = appWebUrl +
    "/_api/SP.AppContextSite(@target)/web/lists(guid'" + hostWebTaskList.Id + "')/items?@target='" + hostWebUrl + "'";

//TODO: find out why this works but the below fails with not actually performing the update
//$.ajax({
executor.executeAsync({
    url: url,
    type: "POST",
    contentType: "application/json;odata=verbose",
    headers: requestHeaders,
    data: requestBody,
    success: addPrimeTasksToHostTaskListSuccessHandler,
    error: addPrimeTasksToHostTaskListErrorHandler
});

You’ll notice I copied over the structure of the method from a normal $.ajax call. THIS IS MY MISTAKE!!!!

As with many things, the devil is in the details. By using this ajax snippet I’d introduced a bug which took nearly four hours to track down (very little could be found on the popular search engines about this). The worst part is that the call fires, comes back with a 200 success, and even enters the success handler, BUT the action is not performed.

So what is the cause? Well, basically there are subtle differences in the signatures.

  • The ajax call ‘type’ should be ‘method’ in the SP.RequestExecutor
  • The ajax call ‘data’ should be ‘body’ in the SP.RequestExecutor

So there you have it: two single-word typos which throw no errors but cause a logical failure in the code.
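If you want a belt-and-braces guard against this exact cut-and-paste trap, a tiny adapter (my own helper, not part of the SharePoint API) can translate jQuery-style options into the names SP.RequestExecutor expects:

```javascript
// Translate a $.ajax-style options object into SP.RequestExecutor form:
// 'type' becomes 'method' and 'data' becomes 'body'; everything else copies over.
function toExecutorOptions(ajaxOptions) {
  var executorOptions = {};
  for (var key in ajaxOptions) {
    if (!ajaxOptions.hasOwnProperty(key)) { continue; }
    if (key === "type") {
      executorOptions.method = ajaxOptions[key];
    } else if (key === "data") {
      executorOptions.body = ajaxOptions[key];
    } else {
      executorOptions[key] = ajaxOptions[key];
    }
  }
  return executorOptions;
}

var opts = toExecutorOptions({ url: "/x", type: "POST", data: "{}" });
// opts -> { url: "/x", method: "POST", body: "{}" }
```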

I hope this helps someone else avoid the pain Open-mouthed smile

Some really useful information about this capability can be read at:

Chris’ app series covers using this library in anger – http://www.sharepointnutsandbolts.com/2012/11/access-end-user-data-in-host-web-from.html

Apps for Office and SharePoint blog article discussing the inner workings and options for cross domain versus cross site collection – http://blogs.msdn.com/b/officeapps/archive/2012/11/29/solving-cross-domain-problems-in-apps-for-sharepoint.aspx

Using REST in SharePoint apps – http://msdn.microsoft.com/en-us/library/office/jj164022.aspx

One final comment: MSDN Code has this sample: http://code.msdn.microsoft.com/SharePoint-2013-Get-items-7c27024f/sourcecode?fileId=101390&pathId=1361160678 which, at the time of writing, doesn’t really demo cross domain because in my opinion it isn’t using the code correctly against the host web.

Awarded Microsoft MVP 2013 for SharePoint Server

620MVP_Horizontal_FullColor

Literally minutes before leaving work on 1st October I received the following email:

MVPEmail

This left me utterly speechless, which is not something that normally happens to me. After re-reading it about ten times the news finally sank in. I’m ecstatic that Microsoft have awarded me the MVP award for my contributions to the SharePoint community.

So I wanted to share a number of thank-yous with everyone who has supported me along the way.

Most importantly, my beautiful wife, who has supported my many, many hours chained to my PC in the home office beavering away on CKSDev or presentations instead of snuggling up in front of a movie on the sofa. Without her understanding and support I definitely wouldn’t have found the time to make the contributions I have. She also sat through many hours of practice presentations before my speaking events.

Matt Smith, who four years ago helped me understand how to start becoming involved in the SharePoint community.

Waldek Mastykarz for his friendship, support and advice and those midnight coding adventures before releasing a new version of CKSDev. Thanks buddy :)

Chris O’Brien for his friendship, advice and putting up with some seriously deep technical conversations over the years when I’m trying to get my head around a SharePoint feature and Visual Studio API.

The CKSDev team past and present, Wouter, Matt, Waldek, Todd, Paul, Carsten and Wictor who all at some point over the last few years have advised, written some cool code and generally helped ensure CKSDev was meeting everyone’s needs.

The event organisers for SUGUK, Evolutions/Best Practices London, SPSUK and SPSNL (Steve Smith and Combined Knowledge, Brett, Tony and Nick, Mirjam and the DIWUG peeps) for providing me opportunities to present and be part of the event teams.

My employers Content and Code and especially people like David Bowman and Tim Wallis who gave me time and support to contribute to the community. Not to mention Salvo di Fazio, Tristan Watkins and Ben Athawes.

To conclude what has been quite an emotional post to write I just want to say thanks to everyone in the community and MVP community who have helped me all these years. There are too many to list, but you know who you are :)

CKSDev for Visual Studio 2012 version 1.2 Released

CKSLogo
The 1.2 version has just been released. It contains loads more of the features you had available in VS2010. All of the remaining features will arrive over the coming weeks as personal time allows. As you can imagine, it’s no mean feat to port all of the existing features and support multiple SharePoint versions. The code base is also on a diet, as code bloat was getting a little crazy as the tools evolved over the past four years.
 
You can find CKS: Development Tools Edition on CodePlex at http://cksdev.codeplex.com.
 
Download the extension directly within VS2012 from the extension manager or visit the Visual Studio Gallery.
We’re still looking for extension ideas so please post in the CKS Dev Discussions anything you think would make life better for SharePoint developers.
 

CKS Dev Feature highlights

Environment:
  • Copy Assembly Name – From the context menu of a SharePoint project you can now get the full assembly name copied to the clipboard.
  • Cancel adding features – Automatic cancellation of new SPIs being added to features. You can turn this off via the CKSDev settings options.
  • Find all project references – Find where a SPI/Feature is being used across the project from the project context menu.
  • Activate selected features – Setup which package features you want to auto-activate from the project context menu.
Exploration
  • No new features
Content
  • ASHX SPI template – Produces a full trust ASHX handler.
  • Basic Service Application template – Produces a full trust basic service application.
  • Branding SPI Template – Produces a full collection of SP2010 branding elements baseline items.
  • Contextual Web Part SPI template – Produces a contextual ribbon enabled web part.
  • WCF Service SPI template – Produces a full trust WCF Service endpoint.
  • Web template SPI template – Produces a SP2010 web template.
Deployment
  • Improvements to Quick Deploy – Performance improvements from switching away from calling GACUtil.exe back to direct GAC API calls. Also removal of ‘custom tokenisation’ for now, until a more performant version is tested.
  • Quick deploy GAC/Bin deployment configuration – A deployment configuration which runs the pre-deployment command line, recycles the application pool, copies the assemblies to either the BIN or GAC depending on their packaging configuration, and runs the post-deployment command line.
  • Quick deploy Files deployment configuration – A deployment configuration which runs the pre-deployment command line, recycles the application pool, copies the SharePoint artefacts to the SharePoint Root, and runs the post-deployment command line.
  • Quick deploy all deployment configuration – A deployment configuration which runs the pre-deployment command line, recycles the application pool, copies the assemblies to either the BIN or GAC depending on their packaging configuration, copies the SharePoint artefacts to the SharePoint Root, and runs the post-deployment command line.
  • Upgrade deployment configuration – A deployment configuration which runs the pre-deployment command line, recycles the application pool, upgrades the previous version of the solution, and runs the post-deployment command line.
  • Attach To IIS Worker Processes Step – Which attaches the debugger to the IIS Worker process during deployment.
  • Attach To OWS Timer Process Step – Which attaches the debugger to the OWS Timer process during deployment.
  • Attach To SPUC Worker Process Step – Which attaches the debugger to the User Code process during deployment.
  • Attach To VSSPHost4 Process Step – Which attaches the debugger to the Visual Studio deployment process during deployment.
  • Copy Binaries Step – Copies the assemblies during deployment to either Bin or GAC.
  • Copy To SharePoint Root Step – Copies the files during deployment to the SharePoint Root.
  • Install App Bin Content Step – Copies the files to the App Bin folder during deployment.
  • Install Features Step – Installs the features during deployment.
  • Recreate Site Step – Recreates the deployment targeted site during deployment.
  • Restart IIS Step – Restarts IIS during deployment.
  • Restart OWS Timer Service Step – Restarts The SharePoint Timer service during deployment.
  • Upgrade Solution Step – Upgrades the solution during deployment.
  • Warm Up Site Step – Calls the site to get SharePoint to warm it up during deployment.
To see more details and the full feature list visit http://cksdev.codeplex.com/documentation.
 

Visit the CodePlex site for more information about the features and release.

Share and Enjoy and thanks for the continued support

SPSNL 2013 Presentation

SPSNL

Saturday 29th June saw the 2013 SharePoint Saturday Holland. Another great event organised with so many quality speakers and companies in attendance. It was a privilege to be invited to speak Smile

I presented a session on Apps for Office 2013 and SharePoint 2013, the slides can be seen below. I hope everyone found the session useful Smile I certainly enjoyed presenting to such an interactive audience.

I think Apps for Office is one of the coolest new features of Office and SharePoint 2013, and my session gives a really quick overview of the Apps for Office solution space, then demonstrates the hook-up between SharePoint and Office that is now possible through the demo solution.

Over the next few months I’ll be publishing a dedicated series for Apps for Office so stay tuned for more soon.

Thanks to everyone who helped organise the event.

Custom SharePoint Item and Project Template gotchas for VS2012

VS2012Logo

This post is a quick brain dump of some of the challenges I’ve been working through updating CKSDev to work with Visual Studio 2012.

Item Templates

 

You can extend the SharePoint Visual Studio tooling in many different ways, one of which is by creating new item templates. Item templates are the code templates which appear in the ‘Add New Item’ dialog in the solution project. Microsoft wrote several good walkthroughs; http://msdn.microsoft.com/en-us/library/vstudio/ee256697(v=vs.110).aspx is an example.

In Visual Studio 2010 the SharePoint items were grouped in a basic ‘SharePoint’ category, and deploying them via a VSIX package was actually dead simple: edit the CSProj and replace the VSTemplate element with the following XML, then save and close the file.

<VSTemplate Include="ItemTemplate.vstemplate">
<OutputSubPath>SharePoint\SharePoint14</OutputSubPath>
</VSTemplate>

The OutputSubPath element specifies additional folders in the path under which the item template is created when you build the project. The folders specified here ensure that the item template will be available only when customers open the Add New Item dialog box, expand the SharePoint node, and then choose the 2010 node.

So that worked fine in Visual Studio 2010 for SharePoint 2010. CKSDev for VS2010 had some smarts under the covers for packaging, but also followed this basic process.

So why is this an issue for Visual Studio 2012?

Well, as you can see, Microsoft moved the items into a new category called ‘Office/SharePoint’.

image

In true Lt Gorman style (Aliens reference when the troops realise they can’t fire the pulse rifles under the reactor)

SO??, So what?

Rather than setting off a thermo-nuclear explosion, this just causes a real headache for custom template VSIX packaging.

Each VSTemplate item in your solution has a property called ‘Category’; this is where you tell VS to package the template under a specific category. Great, so all you have to do is match the path in the new item dialog. Yes, in theory.

The challenge is that the VS/MSBuild machinery behind the scenes treats a forward slash as a new folder. So you end up with ‘Office’ with a sub-folder of ‘SharePoint’. Not what you wanted. Sad smile

Ok so what next?

Well another thing to try is to match the OOTB folder path for the native MS templates. They live under “C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ItemTemplates\CSharp\Office SharePoint\1033”

So setting the ‘Category’ to ‘Office SharePoint’ should work, right? Wrong; more Crying face ensue, as this time the space is encoded to %20 and the templates don’t show up at all.

This process of elimination had by this point taken well over two hours and was getting somewhat frustrating.

So, as I mentioned earlier, CKSDev already has a subtle difference in its packaging, due to being distributed via the Visual Studio Gallery. The gallery has some nasty folder length checks (a combined path of less than 256 characters), which mean the CKSDev template folders and names are compacted to be as short as possible. The gallery rules don’t account for the fact that the tools can’t install on XP anyway; they check the length regardless and reject anything over 256.

Another example of this technique in action is Aaron Marten’s blog article, which explains how this works.

The CKSDev packaging uses some MSBuild logic to create an ‘I’ folder for item templates and a ‘P’ folder for project templates during build. The VSIX then picks them up from there.

The assets section of the VSIX looks like this:

<Assets>
    <Asset Type="Microsoft.VisualStudio.MefComponent" Path="|CKS.Dev11;AssemblyProjectOutputGroup|" />
    <Asset Type="SharePoint.Commands.v4" Path="|CKS.Dev11;SP2010CommandProjectOutputGroup|" />
    <Asset Type="SharePoint.Commands.v5" Path="|CKS.Dev11;SP2013CommandProjectOutputGroup|" />
    <Asset Type="Microsoft.VisualStudio.VsPackage" Path="|CKS.Dev11;PkgdefProjectOutputGroup|" />
    <Asset Type="Microsoft.VisualStudio.Assembly" Path="|CKS.Dev11;AssemblyProjectOutputGroup|" />
    <Asset Type="Microsoft.VisualStudio.ProjectTemplate"
           d:Source="Project"
           d:ProjectName="CKS.Dev11"
           d:TargetPath="p"
           Path="P"
           d:VsixSubPath="P" />
    <Asset Type="Microsoft.VisualStudio.ItemTemplate"
           d:Source="Project"
           d:ProjectName="CKS.Dev11"
           d:TargetPath="I"
           Path="I"
           d:VsixSubPath="I" />
  </Assets>

So the solution was to modify the MSBuild elements for the CSProj file

<Target Name="GetVsixTemplateItems">
   <ItemGroup>
     <VSIXSourceItem Include="@(IntermediateZipItem)">
       <VSIXSubPath>I\%(IntermediateZipItem.Language)\$(ItemTemplateFolderPath)\%(IntermediateZipItem.Culture)</VSIXSubPath>
     </VSIXSourceItem>
   </ItemGroup>
   <ItemGroup>
     <VSIXSourceItem Include="@(IntermediateZipProject)">
       <VSIXSubPath>P\%(IntermediateZipProject.Language)\%(IntermediateZipProject.OutputSubPath)\%(IntermediateZipProject.Culture)</VSIXSubPath>
     </VSIXSourceItem>
   </ItemGroup>
</Target>

 

The item templates use a variable declared with the desired category (aka the folder path), whereas the projects are declared using the ‘Category’ in the properties window of the VSTemplate item (this is the output sub path, for those interested).

The path has to be declared in a separate element as it contains a space.

<PropertyGroup>
    <ItemTemplateFolderPath>Office SharePoint\CKSDev</ItemTemplateFolderPath>
  </PropertyGroup>

All that together gives the desired effect of all the CKSDev items appearing inside the right OOTB category.

image

image

Ok, so now everything is sitting in the desired place. As with a lot of CKSDev coding, it’s the little things like this which take so long. Understanding Visual Studio, MSBuild and SP APIs, all in a day’s work Smile with tongue out

Project templates

 

So with the information above the project template settings are actually a lot easier.

Simply set your category to ‘Office\SharePoint Solutions’, as this time you actually want it to appear in a sub-folder.

image

I hope this saves someone time and headaches Open-mouthed smile

SPC125 – Hybrid and Search in the Cloud – Brad Stevenson

SPCLogo

Brad Stevenson talks about creating a search experience which spans an On-Premises and Office365 SharePoint 2013 environment.

Search in the cloud

 

So what is the story about the cloud search capability?

Comparing SP2010 online to SP2013 online

 

Area            | 2010         | 2013
--------------- | ------------ | -------------------
Crawl Freshness | 1-2 hours    | <15 minutes
Query Latency   | Good         | Better
Scale           | Limited      | Much greater
Manageability   | Limited      | More extensive
User Experience | OK           | Big UX improvements
Extensibility   | Very little  | Some new stuff

 

  • Crawl freshness matters for any search system, as users must be able to trust the results they are given. Part of this trust is seeing the latest content within a timely window.
  • Query latency is already really quite snappy in SP2010 online. SP2013 moves to client-side rendering approaches to improve this perception of snappiness even further; the server shares some of the rendering load with the client device, improving performance for the end user.
  • Scale in SP2013 originates from the FAST technologies, bringing those benefits to bear and making it a powerful and scalable platform.
  • Manageability in SP2013 allows more control over the schema; examples are the control over managed properties and result sources. A lot of the features which were part of the service application have now been brought down to the site collection and tenant administration levels.
  • User experience is dramatically different, with new capabilities such as hover panels, visual refinements etc. This helps a user establish the relevance of a result without leaving the results page or downloading the documents.
  • Extensibility is improved without writing code, with elements such as the rendering templates replacing the complex XSL.

Search extensibility in the cloud

 

No code:

  • Managed properties
  • Result sources
  • Query rules
  • Result types
  • Display templates
  • Dictionaries (via the Term Store)

Code:

  • CSOM
  • REST

Packaging:

  • Import/Export search settings
  • SharePoint apps
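For the REST route above, queries go against the search service's /_api/search/query endpoint. A minimal sketch of composing such a query URL (the endpoint shape is SharePoint 2013's; the site URL and helper name are hypothetical):

```javascript
// Build a SharePoint 2013 search REST query URL against /_api/search/query.
// The site URL below is a made-up placeholder.
function buildSearchQueryUrl(siteUrl, queryText, rowLimit) {
  var url = siteUrl.replace(/\/$/, "") +
    "/_api/search/query?querytext='" + encodeURIComponent(queryText) + "'";
  if (rowLimit) {
    url += "&rowlimit=" + rowLimit;                      // cap the result count
  }
  return url;
}

var searchUrl = buildSearchQueryUrl(
  "https://contoso.example/sites/search", "document conversations", 10);
```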

You manage your ‘global search’ via the tenant admin interface. The only major piece of the service application settings you have no control over is the crawl scheduling. In a multi-tenant environment this really makes sense.

Journey to the cloud

 

Definition of the cloud?

 

  • Public cloud – Office 365, allows you to focus on just the software services.
  • Private cloud – Windows Azure, allows you to offload the OS and hardware to the cloud provider.
  • Traditional – All managed by the internal organisation

Moving to the cloud

 

What to move? Not necessarily everything; should customisations and settings come along too?

When to move it? How do you plan the move? Is it all-or-nothing, or a staged co-existence?

How to move it? What tools are available?

The migration lifecycle

 

Early – 90% on-prem 10% cloud

Mid – 50% on-prem 50% cloud

Late – 10% on-prem 90% cloud

How hybrid search can help

 

Users want to find content easily; they just want to find the things they’re looking for without having to understand the system’s structure. It is about getting their job done efficiently.

Users don’t care about migration. So don’t force users to track what’s being moved and when.

Realise that most users will never move EVERYTHING to the cloud.

Hybrid Search User Experience

 

Demo environment details:

  • On-Premises SP2013 crawling a mixture of SharePoint data and file shares
  • Office365 indexing all of its content
  • Firewall between

So the idea is that from within either environment the user can get results from either. They use query rules to ‘cross-pollinate’ results from the other environment as a block of results. Personally I’m not sure this is a great user experience. It gives a false impression to a user of which results are most important. So I remain to be convinced about using result blocks.

A neat thing to know is that the refinement panel operates over ALL returned results rather than just the local SharePoint items. Also, the hover panels are dependent on the source’s WOPI configuration.

Configuring Hybrid Search

 

Configuration steps:

  • Choose one-way or bi-directional
  • Deploy and configure pre-reqs
  • Configure search data settings
  • Configure search UI settings

If you are early in your migration lifecycle, a one-way setup where on-premises indexes Office365 might suit your needs; late on, a one-way setup where Office365 uses on-premises works. Mid-life is definitely bi-directional, where the experience should be the same in both environments.

Environment Configuration

 

image

One-way or Bi-directional

 

Where will users go to search?

  • Just on-premises
  • Just Office365
  • Both

Hybrid pre-requisites

 

Non-SharePoint:

  • Reverse proxy and certificate authentication
  • Identity provider (ADFS or Shibboleth for Office365)
  • MSOL Tools
  • SSO with Office365
  • DirSync

SharePoint:

  • New SharePoint STS Token signing certificate
  • Configure a trust relationship between SharePoint on-premises and ACS
  • Configure secure store
  • Configure UPA

Configure Data Settings

 

Result source (equivalent to a federated location and scope in SP2010) pointing at:

  • URL of remote location
  • Secure Store (for client certificate)

Configure UI Settings

 

  • Query rule to show remote results in ‘Everything’ tab
  • Search vertical which ‘only’ displays results from remote location (Office365 or on-premises)

Search Centre On-Premises: Data Flow

 

image

Scenario One:

The user logs into on-premises and issues a search query. This actually issues two queries: the first against the local on-premises index; the second to Office365, through its CSOM endpoint. Identity mapping takes place, where the on-premises identity is mapped to the Office365 identity. Office365 then performs the query and returns the results.

Search Centre in Office365: Data Flow

 

image

Scenario Two:

The basic flow is the reverse of the on-premises one, except for the introduction of the reverse proxies at the perimeter to route the request back to the on-premises SharePoint. Identity is mapped from the Office365 user to the on-premises user.

This means that in both scenarios there is correct data security trimming.

Beyond the Basics

Design considerations

 

What did Microsoft consider when designing the Office365 service:

  • Crawl versus Query – They chose the query route, as the crawl infrastructure within Office365 was limiting. Also, hundreds of thousands of tenants need consistent performance maintained; query helps provide the best and most consistent user experience.
  • UI Presentation – Users felt it was really important to see all the results on the same page; they didn’t want to have to switch between different pages. (I’m not sure I agree this UX is optimal; users are used to choosing the ‘type’ of data they want on Bing/Google, e.g. Images/Shopping.)
  • Relevance and clicks – Search learns over time, watching which results are clicked for specific queries and adjusting accordingly.
  • Backward compatibility – Hybrid with MOSS and SP2010 on-premises was considered, but was not possible without significant investment in extending the previous SharePoint versions; one major element was the challenge of identity mapping.

Alternative Hybrid User Experience

 

The demonstration showed the example of a glossary. This is stored in a SharePoint list on-premises. It is very useful information, but is not something users query for all the time. From on-premises the demo pulls ‘people’ results from Office365 via a dedicated search vertical.

To get the on-premises ‘people’ page to use the Office365 profiles, it is as simple as pointing the web part settings at the remote result source. This means when users click the people results they go to the Office365 profile page. The question is what happens to the links to people in normal item results?

Hit highlighting works.

Question about multiple people stores. Answer was the suggested best practice is to host in just one location.

Result Click-Through: Options

 

image

For users on-premises, accessing on-premises links is fine. Once the user is on the Office365 search results, however, the results have internal-only URLs; the click-through doesn’t route through the reverse proxy any more, so users must have external access to the internal system.

One solution is a VPN or Direct Access, or leveraging the reverse proxy for URL re-writing.

Microsoft recommend using VPN or Direct Access as it is easier to maintain over time.

The WOPI previews operate wherever the content is being served from.

CKSDev for Visual Studio 2012 version 1.0 Released

CKSLogo

Since 2009 CKSDev has been available to SharePoint 2010 developers to aid the creation of SharePoint solutions. The project extended the Visual Studio 2010 SharePoint project system with advanced templates and tools. Using these extensions you were able to find relevant information from your SharePoint environments without leaving Visual Studio. You experienced greater productivity while developing SharePoint components and had greater deployment capabilities on your local SharePoint installation.

The Visual Studio 2012 release of the MS SharePoint tooling supports both SharePoint 2010 and 2013. Our aim for CKSDev was to follow this approach and provide one tool to support both SharePoint versions from within Visual Studio 2012.

The VS2012 version has just been released. It contains the core quick deployment and server explorer features you had available in VS2010. All of the other features will be coming over the coming weeks as personal time allows. As you can imagine it’s no mean feat to port all of the existing features and support multiple SharePoint versions. The code base is also on a diet as code bloat was getting a little bit crazy as the tools evolved over the past 4 years.

You can find CKS: Development Tools Edition on CodePlex at http://cksdev.codeplex.com.

Download the extension directly within VS2012 from the extension manager or visit the Visual Studio Gallery.

We’re still looking for extension ideas so please post in the CKS Dev Discussions anything you think would make life better for SharePoint developers.

CKS Dev Feature highlights

Environment:
  • Copy assembly name so no more need to use reflector to get the full strong name.
Exploration
  • New Server Explorer nodes to list feature dependencies, site columns, web part gallery, style library, theme gallery.
  • Enhanced import functionality for web parts, content types, themes.
  • Create page layout from content type (publishing).
  • Copy Id.
Content
  • Coming Soon…
Deployment
  • Quick deploy to copy artefacts into the SharePoint root without the need for a build.
  • Deployment steps Coming Soon..

To see more details and the full feature list visit http://cksdev.codeplex.com/documentation.

Visit the CodePlex site for more information about the features and release.

Share and Enjoy and thanks for the continued support