Last Update: 7/14/2017

The TFS Database Import Service for Visual Studio Team Services is currently in preview.

Import

This page walks through the preparation work required to get an import to Team Services ready to run. If you encounter errors during the process, be sure to review the troubleshooting and advanced topics documentation.

Validating a Collection

Now that you've confirmed you're on the latest version of TFS, the next step is to validate each collection you wish to migrate to Team Services. The validate command examines a variety of aspects of your collection, including, but not limited to: size, collation, identity, and processes. Validation is run through TfsMigrator. To start, copy TfsMigrator onto one of your TFS server's Application Tiers (ATs) and unzip it there. The tool can also be run from a machine other than the TFS AT, as long as that machine can connect to the TFS instance's configuration database; an example is provided below.

To get started, open a command prompt on the server and CD to the path where you have TfsMigrator placed. Once there, it's recommended that you take a moment to review the help text provided with the tool. Run the following command to see the top-level help and guidance:

TfsMigrator /help

For this step, we’ll be focusing on the validate command. To see the help text for that command simply run:

TfsMigrator validate /help 

Since this is our first time validating a collection, we'll keep it simple. Your command should have the following structure:

TfsMigrator validate /collection:{collection URL}

For example, running this command against the default collection would look like:

TfsMigrator validate /collection:http://localhost:8080/tfs/DefaultCollection

Running it from a machine other than the TFS server requires the /connectionString parameter. The connection string parameter is a pointer to your TFS server's configuration database. As an example, if the validate command were being run by the Fabrikam corporation, the command would look like:

TfsMigrator validate /collection:http://fabrikam:8080/tfs/DefaultCollection /tenantDomainName:fabrikam.OnMicrosoft.com /connectionString:"Data Source=fabrikamtfs;Initial Catalog=Tfs_Configuration;Integrated Security=True"

Executing the validate command will have TfsMigrator go through the entire collection and check for potential migration issues. It’s important to note that TfsMigrator DOES NOT edit any data or structures in the collection. It also DOES NOT try to send any data back to Microsoft. It only reads the collection to identify issues.

Once the validation is complete you’ll be left with a set of log files and a set of results printed to the command prompt screen.

TfsMigrator validate output

If all validations pass, then the collection is ready to import and you can safely move on to generating the required import files. The output related to validating import files can be safely ignored for now; it's covered later on. If TfsMigrator flagged any errors, they will need to be corrected before moving on. See troubleshooting for guidance on correcting validation errors.

When you open the log directory, you will notice several log files.

Logging files generated by TfsMigrator

TfsMigrator.log is the main log and contains details on everything that was run. To make it easier to narrow in on specific areas, a log is generated for each major validation operation. For example, if TfsMigrator reported an error in the "Validating Project Processes" step, you can simply open the ProjectProcessMap.log file to see everything that was run for that step instead of having to scroll through the overall log. The TryMatchOOProcessMatch.log should be ignored if you have applied any customizations to your projects' processes. It's meant to confirm whether your collection is eligible to start using the inherited process management model after migration.

If you do hit a failure, we would like to ask that you zip up your logs from the run and send them to vstsdataimport@microsoft.com. This helps us identify areas of future investment for our migration pipeline.

Generating Import Files

By this point you will have run TfsMigrator validate against the collection that you plan to migrate, and it should be returning "All collection validation passed". This is a great message! It means that your collection is ready to import to Team Services. But, before you start taking the collection offline and notifying your co-workers about the upcoming migration, there is one more bit of preparation that needs to be completed – generating the import files. These two files specify your identity map between Active Directory (AD) and Azure Active Directory (AAD), and the import specification that will be used to kick off your migration.

Prepare Command

The prepare command assists with generating the required import files. Essentially, this command scans the collection to find a list of all users to populate the identity map and then tries to connect to AAD to find each identity's match. If your company has employed the Azure Active Directory Connect tool (formerly known as the Directory Synchronization tool, Directory Sync tool, or the DirSync.exe tool), then TfsMigrator should be able to auto-populate the mapping file. The import specification file is a mostly empty template which you'll need to fill out prior to import.

Unlike the validate command, prepare DOES require an internet connection, as it needs to reach out to AAD in order to populate the identity mapping file. If your TFS server doesn't have internet access, you'll need to run the tool from a different PC that does. As long as you can find a PC that has an intranet connection to your TFS server and an internet connection, you can run this command. Run the following command to see the guidance for the prepare command:

TfsMigrator prepare /help

Included in the help documentation are instructions and examples for running TfsMigrator from the TFS server itself and from a remote PC. If you're running the command from one of the TFS server's Application Tiers (ATs), your command should have the following structure:

TfsMigrator prepare /collection:{collection URL} /tenantDomainName:{name}

If you're not running it from the TFS server, then the command will have the following structure:

TfsMigrator prepare /collection:{collection URL} /tenantDomainName:{name} /connectionString:"Data Source={sqlserver};Initial Catalog=Tfs_Configuration;Integrated Security=True"

The connection string parameter is a pointer to your TFS server's configuration database. As an example, if the prepare command was being run by the Fabrikam corporation the command would look like:

TfsMigrator prepare /collection:http://fabrikam:8080/tfs/DefaultCollection /tenantDomainName:fabrikam.OnMicrosoft.com /connectionString:"Data Source=fabrikamtfs;Initial Catalog=Tfs_Configuration;Integrated Security=True"

Upon executing this command, TfsMigrator will run a complete validation to ensure that nothing has changed with your collection since the last full validation. If any new issues are detected, the import files will not be generated. Shortly after the command has started running, an AAD login window will appear. You will need to sign in with an identity that belongs to the tenant domain specified in the command. It's important to make sure that the AAD tenant specified is the one you want your future Team Services account to be backed by. For our Fabrikam example, the user would enter something similar to what's shown in the image below.

AAD login prompt

A successful run of TfsMigrator prepare will result in a set of logs and import files.

Import files generated by TfsMigrator

After opening the log directory noted in TfsMigrator's output, you will notice two files and a Logs folder. IdentityMap.csv contains the generated mapping of AD to AAD identities. import.json is the import specification file which needs to be filled out. It's recommended that you take time to fill out the import specification file and review the identity mapping file for completeness.

Import Specification File

The import specification is a JSON file that serves as the master import file and provides information such as the desired account name, subscription, account region, storage account information, and location of the identity mapping file. Most of the fields are auto-populated; some fields require user input prior to attempting an import.

Newly generated import specification file

Here is the breakdown of the fields and what action needs to be taken:

Field | Explanation | Action
Source | Information detailing the location and names of the source data files used for import. | None - Review information for subfield actions below.
Target | Information detailing the desired region and name for the new Team Services account. | None - Review information for subfield actions below.
ValidationData | Contains data related to database size, collation, and usage. | None - It contains data that is captured during import.
Files | Names of the files containing import data. | None - Review information for subfield actions below.
Target | Properties describing the new Team Services account to import into. | None - Review information for subfield actions below.
AccountName | Desired name for the account that will be created during the import. | Select a name. This name can be quickly changed later after the import has completed. Note - do NOT create the account before running the import. The account will be created as part of the import process.
Region | Region that your account will be hosted in. | Review the list of regions in the generated file's text. Replace the entry for this value with the short name of the region you want the account to reside in.
Location | SAS key to the Azure storage account hosting the DACPAC and identity mapping file. | None - This will be covered in a later step.
Dacpac | A file that packages up your collection database and is used to bring the data in during import. | None - In a later step you'll generate this file from your collection and upload it to an Azure storage account. It will need to be updated based on the name you use when generating the DACPAC later in this process.
IdentityMapping | Name of the identity mapping file to use. | None - In a later step you'll upload this file along with the DACPAC to an Azure storage account. If you change the name of the file, be sure to update it here as well.
ImportCode | Code given out during the preview to allow an import to be queued. | None - In a later step you'll add this to the import specification.

It's important to note that if you have chosen to import your collection into a region outside of the United States or Europe, then your data will be held in a secured location in the United States for up to 7 days as a staging point for the data import process. After that period has ended, your staged data will be deleted.

After following the above instructions, you should have a file that looks somewhat like the example below.

Half filled out import specification file

In this case, the user planning the Fabrikam import added the account name "Fabrikam-Import" and selected the Central US region in the Target object. Other values were left as is to be modified just before taking the collection offline for the migration.
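To make this concrete, here is a minimal sketch of what the Target section of the specification might contain at this point. The field names follow the table above, but the exact nesting in your generated import.json may differ, so edit the generated file in place rather than copying this structure:

{
  "Target": {
    "AccountName": "Fabrikam-Import",
    "Region": "CUS"
  }
}

The "CUS" value is the short name for Central United States from the supported regions table in the next section; replace both values with your own account name and desired region.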

Supported Azure Regions for Import

Team Services is available in a multitude of Azure regions. However, not all Azure regions that Team Services is present in are supported for import. The below table details the Azure regions that can be selected for import. Also included is the value which needs to be placed in the import specification file to target that region for import.

Geographic Region | Azure Region | Import Specification Value
United States | Central United States | CUS
Europe | Western Europe | WEU
Australia | Australia East | EAU
South America | Brazil South | SBR
Asia Pacific | South India | MA

Identity Map

Arguably, the identity map is of equal importance to the actual data that you will be migrating to Team Services. Before opening the file, it's important to understand how identity import operates and what the potential results could entail. When an identity is imported, it ends up as either active or historical. The difference between active and historical identities is that active identities can log into Team Services whereas historical identities cannot. It's important to note that once an identity is imported as historical, there is no way to later make it active.

When reviewing and editing the identity mapping file in Excel, ensure that the file is saved as a comma delimited CSV. Mapping files that are saved using a non-comma delimiter can't be used for import.

Active Identities

Active identities refer to identities that will be users in Team Services post-import. On Team Services, these identities are licensed and show up as users in the account after migration. These identities have a completed mapping between on-premises AD and hosted AAD in the identity mapping file.

Historical Identities

These are identities that do NOT have completed mappings specified in the identity mapping file. This can either mean that there is no line entry present in the file for that identity, or that there is a line in the file for that identity but it isn't completely filled out - for example, no AAD user principal name was provided for a user. Historical identities do NOT have access to a Team Services account after migration, do NOT have a license, and do NOT show up as users in the account. All that is persisted is the notion of that identity's name in the account, so that their history can be searched at a later date. It's recommended that historical identities be used for users that are no longer at the company or that won't ever need access to the Team Services account. Identities imported as historical CANNOT be migrated later to become active identities.

Understanding an Identity Map

It's recommended that the identity mapping file be opened in Excel. This will make it easier to both read and edit. After opening the file, you will be presented with something similar to the example below.

Identity mapping file generated by TfsMigrator

The table below explains what each column is used for.

Column | Explanation
User | Friendly display name used by the identity in TFS. Makes it easier to identify which user the line in the map is referencing.
AD:SecurityIdentifier[Source] | The unique identifier for the on-premises AD identity in TFS. This column is used to identify users in the collection.
AAD:UserPrincipalName[Target] | The identifier for the matching AAD identity. Entries in this column show the identity that users will sign in with after the migration. Everything belonging to the TFS identity will be remastered to this AAD identity if the mapping is valid.
License | Desired license the user should have after import.
License Assignment Override | Used for overriding the value currently in the License column. See overriding licensing values for more details on using this column.
Status | Indication of whether or not the identity mapping on this line is valid.
Validation Date | Last time the identity map was validated.

Reading through the file, you will notice the Status column has either "OK" or "NO MATCH". OK indicates that the identity on this row is expected to map correctly on import and become active. Rows marked NO MATCH will become historical identities on import. It's important that you review the generated mapping file for completeness and correctness.
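As an illustration, a couple of rows in the mapping file might look roughly like the following. The column headers are those described in the table above; the display names, SIDs, user principal name, license value, and dates are hypothetical placeholders, and the second row shows an identity without an AAD match:

User,AD:SecurityIdentifier[Source],AAD:UserPrincipalName[Target],License,License Assignment Override,Status,Validation Date
John Smith,S-1-5-21-1111111111-2222222222-3333333333-1001,john.smith@fabrikam.OnMicrosoft.com,Basic,,OK,2017-07-01
Jane Doe,S-1-5-21-1111111111-2222222222-3333333333-1002,,,,NO MATCH,2017-07-01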

Start by reviewing the correctly matched identities. Are all of the expected identities present? Are the users mapped to the correct AAD identity? If any values are incorrectly mapped or need to be changed, you'll need to contact your Azure AD administrator to check whether the on-premises AD identity is part of the sync to Azure AD and has been set up correctly. Check the documentation on setting up a sync between your on-premises AD and Azure AD.

Next, review the identities that are labeled as 'NO MATCH'. This implies that a matching AAD identity couldn't be found. This could be for one of four reasons.

  1. The identity hasn't been set up for sync between on-premises AD and Azure AD.
  2. The identity hasn't been populated in your AAD yet (for example, a new employee).
  3. The identity simply doesn't exist in your AAD.
  4. The user that owned that identity no longer works at the company.

In the first two cases, the desired on-premises AD identity will need to be set up for sync with Azure AD. Check the documentation on setting up a sync between your on-premises AD and Azure AD. It's required that Azure AD Connect be set up and run for identities to be imported as active in Team Services.

For the last two cases, the row can be left in or removed from the file. The end result will be the same - a historical identity. For simplicity and readability, it's recommended that you reduce the mapping file down to just the set of identities that you wish to be active after import.

The UserPrincipalName[Target] column CANNOT be manually updated. Users marked as "NO MATCH" will need to be investigated with your AAD admin to see why they aren't part of your directory sync.

License assignments populated by TfsMigrator's prepare command can be overridden. Please see overriding licensing values for more details on how to change the assignments.

No Matched Identities

The identity import strategy proposed in this section should only be considered by small teams.

In cases where Azure AD Connect hasn't been configured and run previously, you will notice that all users in the identity mapping file are marked as 'NO MATCH'. Running an import with a complete set of no-match identities will result in all users being imported as historical. It's strongly recommended that you configure Azure AD Connect to ensure that your users are imported as active.

Running an import with all no matches has consequences which need to be considered carefully. It should only be considered by teams with a small number of users, where the cost of setting up Azure AD Connect is deemed too high.

To import with all no matches, simply follow the steps outlined in later sections. When queuing an import, the identity used to queue it will be bootstrapped into the account as the account owner. All other users will be imported as historical. The account owner will then be able to add users back in using their AAD identities. Added users will be treated as new users: they will NOT own any of their history, and there is no way to re-parent this history to the AAD identity. However, users can still look up their pre-import history by searching for {domain}{AD username}.

TfsMigrator will warn you if it detects the complete no-match scenario. If you decide to go down this migration path, you will need to consent in the tool to the limitations.

Visual Studio Subscriptions

TfsMigrator will not be able to automatically detect Visual Studio subscriptions when generating the identity mapping file. There are two ways to ensure that your users have their Visual Studio subscription benefits applied in Team Services post import:

  • Override License Assignments - Follow the instructions on overriding licensing values to specify Visual Studio subscriptions for the correct set of users.
  • Auto Upgrade Post Import - As long as a user's work account is linked correctly, Team Services will automatically apply their Visual Studio subscription benefits on their first login post import. You're never charged for other types of licenses assigned during import.

Getting Ready to Import

By this point you will have everything ready to execute your import. You will need to schedule downtime with your team to take the collection offline for the migration. Once you have agreed upon a time to run the import, you need to upload all of the required assets you have generated, plus a copy of the database, to Azure. This process has five steps:

  1. Take the collection offline and detach it.
  2. Generate a DACPAC from the collection you're going to import.
  3. Upload the DACPAC and import files to an Azure storage account.
  4. Generate a SAS Key to that storage account.
  5. Fill out the last fields in the import specification.

Detaching your Collection

Detaching the collection is a crucial step in the import process. Identity data for the collection resides in the TFS server's configuration database while the collection is attached and online. When a collection is detached from the TFS server, it takes a copy of that identity data and packages it up with the collection for transport. Without this data, the identity portion of the import CANNOT be executed. Resources are available online to walk through detaching a collection. It's recommended that the collection stay detached until the import has been completed, as there isn't a way to import changes which occurred during the import.

If you're running a dry run (test) import, it's recommended to reattach your collection after backing it up for import, since you won't be concerned about having the latest data for this type of import. You could also choose to employ an offline detach for dry runs to avoid offline time altogether. It's important to weigh the cost involved with going the zero-downtime route for a dry run. It requires taking backups of the collection and configuration database, restoring them on a SQL instance, and then creating a detached backup. A cost analysis could prove that taking just a few hours of downtime to directly take the detached backup is better in the long run.

Generating a DACPAC

Important: Before proceeding, ensure that your collection was detached prior to generating a DACPAC. If you didn't complete this step the import will fail.

Data-tier Application Component Packages (DACPAC) is a feature in SQL Server that allows database changes to be packaged into a single file and deployed to other instances of SQL. It can also be restored directly to Team Services, and is therefore used as the packaging method for getting your collection's data into the cloud. You're going to use the SqlPackage.exe tool to generate the DACPAC. This tool is included as part of SQL Server Data Tools.

When generating a DACPAC there are two considerations to keep in mind: the disk that the DACPAC will be saved on, and the disk space available on the machine performing the DACPAC generation. Before generating a DACPAC, you'll want to ensure that you have enough space on disk to complete the operation. While creating the package, SqlPackage.exe temporarily stores data from your collection in the temp directory on the C: drive of the machine you initiate the packaging request from. Some users might find that their C: drive is too small to support creating a DACPAC. You can estimate the amount of space you'll need by looking at the largest table in your collection database; DACPACs are created one table at a time, so the maximum space requirement to run the generation will be roughly equivalent to the size of the largest table in the collection's database. Running the query below will display the size of the largest table in your collection's database in MB. Compare that size with the free space on the C: drive of the machine you plan to run the generation on.

-- Largest table (by clustered index size) in the current collection database, reported in MB
SELECT TOP 1 OBJECT_NAME(object_id), SUM(reserved_page_count) * 8/1024.0 AS SizeInMb
FROM sys.dm_db_partition_stats
WHERE index_id = 1
GROUP BY object_id
ORDER BY SizeInMb DESC

Using the size output from the SQL command, ensure that the C: drive of the machine that will create the DACPAC has at least that much free space. If it doesn't, you'll need to redirect the temp directory by setting an environment variable:

SET TEMP={location on disk}
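For example, if a larger secondary drive were available on the machine, you could point the temp directory at it for the duration of the packaging run (the drive letter and folder below are purely illustrative):

SET TEMP=E:\SqlPackageTemp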

Another consideration is where the DACPAC data is saved. Pointing the save location at a far-off remote drive could result in much longer generation times. If a fast drive, such as an SSD, is available locally, it's recommended that you target that drive as the DACPAC's save location. Otherwise, it's always faster to use a disk on the machine where the collection database resides rather than a remote drive.

Now that you've identified the target location for the DACPAC and ensured that you'll have enough space, it's time to generate the DACPAC file. Open a command prompt and navigate to the location where SqlPackage.exe is located. Using the command example below, replace the required values and generate the DACPAC.

SqlPackage.exe /sourceconnectionstring:"Data Source={database server name};Initial Catalog={Database Name};Integrated Security=True" /targetFile:{Location & File name} /action:extract /p:ExtractAllTableData=true /p:IgnoreUserLoginMappings=true /p:IgnorePermissions=true /p:Storage=Memory
  • Data Source - SQL Server instance hosting your TFS collection database.
  • Initial Catalog - Name of the collection database.
  • targetFile - Location on disk + name of DACPAC file.

Below is an example of the DACPAC generation command that is running on the TFS data tier itself:

SqlPackage.exe /sourceconnectionstring:"Data Source=localhost;Initial Catalog=Tfs_Foo;Integrated Security=True" /targetFile:C:\DACPAC\Tfs_Foo.dacpac /action:extract /p:ExtractAllTableData=true /p:IgnoreUserLoginMappings=true /p:IgnorePermissions=true /p:Storage=Memory

The output of the command will be a DACPAC named Tfs_Foo.dacpac, generated from the collection database Tfs_Foo.

If you run into trouble generating the DACPAC, or if your total collection data size is greater than 500 GB, please reach out to vstsdataimport@microsoft.com. We'll work with you to get your collection imported into Team Services.

Importing Large Collections

Importing using a SQL Azure VM should only be used if your collection database is above the size threshold noted below. Otherwise, use the DACPAC method outlined above.

DACPACs offer a fast and relatively simple method for moving collections into Team Services. However, once a collection database crosses the 150 GB size threshold, the benefits of using a DACPAC start to diminish. For databases over this threshold, a different data packaging approach is required to migrate to Team Services.

Before going any further, it's always recommended to see if old data can be cleaned up. Over time, collections can build up very large volumes of data. This is a natural part of the DevOps process, but some of this data might no longer be relevant and doesn't need to be kept around. Some common examples are older workspaces and build results. Cleaning up older, no-longer-relevant artifacts might free more space than one would expect. It could be the difference between using the DACPAC import method or having to use a SQL Azure VM. It's important to note that once you delete older data it CANNOT be recovered without restoring an older backup of the collection.

If you're still unable to get the database under the DACPAC threshold, then you will need to set up a SQL Azure VM to import to Team Services. There are several steps involved in setting up a SQL Azure VM for migrating data to Team Services. We'll walk through how to accomplish this end-to-end. Steps covered include:

  1. Setting up a SQL Azure VM
  2. Restoring your database on the VM
  3. Creating an identity to connect to the collection database
  4. Configuring your import specification file to use a SQL connection string
  5. Optionally, we recommend restricting access to just Team Services IPs

Setting up a SQL Azure VM can be done from the Azure portal with just a few clicks. Azure has a tutorial on how to set up and configure a SQL Azure VM. Follow that tutorial to ensure that your VM is configured correctly and SQL can be accessed remotely. Note that it's important to put your VM in the same Azure region that your future Team Services account will reside in. This will increase the import speed, as all transfers will stay within a data center.

Below are some recommended configurations for your SQL Azure VM.

  1. It's recommended that D Series VMs be used, as they're optimized for database operations.
  2. Configure the SQL temporary database to use a drive other than the C drive. Ideally this drive should have ample free space; at least equivalent to your database's largest table.
  3. If your source database is still over 1 TB after reducing its size, you will need to attach additional 1 TB disks and combine them into a single partition to restore your database on the VM.
  4. For collection databases over 1 TB in size, consider using Solid State Drives (SSDs) for both the temporary database and the collection database.

Team Services is available in a multitude of regions across the globe. When importing to these regions it's critical that you place your data in the correct region to ensure that the import can start correctly. Setting up your SQL Azure VM in a location other than the ones recommended below will result in the import either failing to start or taking much longer than expected to complete.

Use the table below to decide where you should create your SQL Azure VM if you're using this method to import.

Desired Import Region | SQL Azure VM Region
Central United States | Central United States
Western Europe | Western Europe
Australia East | Australia East
Brazil South | Brazil South
South India | South India

While Team Services is available in multiple regions in the United States, only the Central United States region is accepting new Team Services accounts. Customers will not be able to import their data into other United States Azure regions at this time.

DACPAC customers should consult the region table in the uploading DACPAC and import files section. The above guidelines are for SQL Azure VMs only.

After setting up and configuring an Azure VM, you will need to take your detached backup from your TFS server to your Azure VM. Azure has several methods documented for how to accomplish this task. The collection database needs to be restored on SQL and doesn’t require TFS to be installed on the VM.
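As a rough sketch, restoring the backup with T-SQL on the VM might look like the following. The backup path, target file paths, and logical file names are assumptions for this example; run RESTORE FILELISTONLY first to see the actual logical names in your backup and adjust accordingly:

-- Inspect the logical file names contained in the backup
RESTORE FILELISTONLY FROM DISK = N'E:\Backups\Tfs_Foo.bak';

-- Restore the collection database, relocating the data and log files
RESTORE DATABASE [Tfs_Foo]
FROM DISK = N'E:\Backups\Tfs_Foo.bak'
WITH MOVE 'Tfs_Foo' TO N'F:\Data\Tfs_Foo.mdf',
     MOVE 'Tfs_Foo_log' TO N'F:\Data\Tfs_Foo_log.ldf',
     STATS = 10;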

Once your collection database has been restored onto your Azure VM, you will need to configure a SQL login to allow Team Services to connect to the database to import the data. This login will only allow read access to a single database. Start by opening SQL Server Management Studio on the VM and open a new query window against the database that will be imported.

You will need to set the database's recovery model to simple:

ALTER DATABASE [<Database name>] SET RECOVERY SIMPLE;

Next you will need to create a SQL login for the database and assign that login the 'TFSEXECROLE':

USE [<database name>]
CREATE LOGIN <pick a username> WITH PASSWORD = '<pick a password>'
CREATE USER <username> FOR LOGIN <username> WITH DEFAULT_SCHEMA=[dbo]
EXEC sp_addrolemember @rolename='TFSEXECROLE', @membername='<username>'

Following our Fabrikam example, the SQL commands would look like the following:

ALTER DATABASE [Tfs_Foo] SET RECOVERY SIMPLE;

USE [Tfs_Foo]
CREATE LOGIN fabrikam WITH PASSWORD = 'fabrikamimport1!'
CREATE USER fabrikam FOR LOGIN fabrikam WITH DEFAULT_SCHEMA=[dbo]
EXEC sp_addrolemember @rolename='TFSEXECROLE', @membername='fabrikam'

Finally, the import specification file will need to be updated to include information on how to connect to the SQL instance. Open your import specification file and make the following updates:

Remove the DACPAC parameter from the source files object.

Before

Import specification before change

After

Import specification after change

Fill out the required parameters and add the following properties object within your source object in the specification file.

"Properties":
{
    "ConnectionString": "Data Source={SQL Azure VM IP};Initial Catalog={Database Name};Integrated Security=False;User ID={SQL Login Username};Password={SQL Login Password};Encrypt=True;TrustServerCertificate=True" 
}

Following the Fabrikam example, the import specification would look like the following after applying the changes:

Import specification referencing a SQL Azure VM
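In text form, a rough sketch of the relevant portion of the Source object for the Fabrikam example might read as follows. The nesting shown is illustrative and the IP address is a placeholder; the login, password, and database name match the earlier Fabrikam SQL example:

"Source":
{
    "Files":
    {
        "IdentityMapping": "IdentityMap.csv"
    },
    "Properties":
    {
        "ConnectionString": "Data Source={SQL Azure VM IP};Initial Catalog=Tfs_Foo;Integrated Security=False;User ID=fabrikam;Password=fabrikamimport1!;Encrypt=True;TrustServerCertificate=True"
    }
}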

Your import specification is now configured to use a SQL Azure VM for import! Proceed with the rest of the preparation steps to import to Team Services. Once the import has completed, be sure to delete the SQL login or rotate the password. Microsoft does not hold onto the login information once the import has completed.

Optionally, but highly recommended, you can further restrict access to your SQL Azure VM. This can be accomplished by allowing connections only from the set of Team Services IPs that are involved in the collection database import process. The IPs that need to be granted access to your collection database depend on what region you're importing into. The tables below will help you identify the correct IPs. The only port that is required to be open to connections is the standard SQL connection port, 1433.

First, no matter what Team Services region you're importing into, the following IP must be granted access to your collection database.

Service | IP
Team Services Identity Service | 168.62.105.45

Next you will need to grant access to the TFS Database Import Service itself. Customers in Europe and the United States should select the service which is in their own region from the below table. Customers importing to locations outside of the United States and Europe must add an exception for the United States instance.

Service | IP
Database Import Service - West Europe | 40.115.43.138
Database Import Service - Central United States | 52.173.74.9
Database Import Service - South Central United States | 40.124.13.10

Then you will need to grant access to the Team Services instances in the region that you're importing into.

If you're importing into Western Europe, you will need to grant access for the following IP:

Service | IP
Team Services - Western Europe 2 | 40.68.34.220

If you're importing into Central United States you will need to grant access for the following IPs:

Service | IP
Team Services - Central United States 1 | 104.43.203.175
Team Services - Central United States 2 | 13.89.236.72

If you're importing into India South you will need to grant access for the following IP:

Service | IP
Team Services - India South | 104.211.227.29

If you're importing into Australia East you will need to grant access for the following IP:

Service | IP
Team Services - Australia East | 191.239.82.211

If you're importing into South Brazil you will need to grant access for the following IPs:

Service | IP
Team Services - South Brazil 1 | 191.232.37.247
Team Services - South Brazil 2 | 13.75.145.145

Finally, if you're queuing the import from a machine other than your SQL Azure VM, you will need to grant an exception for that machine's IP as well. It's recommended that you run the import command with the '/validateOnly' flag prior to queuing it. That allows you to quickly verify that the firewall rules are working.
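As one possible way to scope access on a Windows-based SQL Azure VM, a Windows Firewall rule along the following lines could limit inbound SQL traffic on port 1433 to the IPs listed above. The rule name is arbitrary, and the IP list shown assumes an import into Central United States; adjust it for your target region and add your own machine's IP if you queue the import remotely:

netsh advfirewall firewall add rule name="TFS Import SQL access" dir=in action=allow protocol=TCP localport=1433 remoteip=168.62.105.45,52.173.74.9,104.43.203.175,13.89.236.72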

Uploading the DACPAC and Import Files

All of the files required to run the import need to be placed in an Azure storage container. This can be an existing container or one created specifically for your migration effort. It's recommended that you create a new container, as the Azure region that the container exists in matters when queuing an import.

Team Services is available in a multitude of regions across the globe. When importing to these regions, it's critical that you place your data in the correct region to ensure that the import can start correctly. Placing your data in a location other than the ones recommended below will result in the import either failing to start or taking much longer than expected to complete.

Desired Import Region | Storage Account Region
Central United States | Central United States
Western Europe | Western Europe
Australia East | Central United States
Brazil South | Central United States
South India | Central United States

While Team Services is available in multiple regions in the United States, only the Central United States region is accepting new Team Services accounts. Customers will not be able to import their data into other United States Azure regions at this time.

Customers outside of the United States and Europe will need to place their data in Central United States. This is temporary as we work to put instances of the TFS Database Import Service in those countries. Your data will still be imported into your desired import region.

Creating a container can be done from the Azure portal. Once the container has been created you will need to upload the following files:

  • Identity Map CSV
  • Collection DACPAC

This can be accomplished using tools like AzCopy or any other Azure storage explorer tool.
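For instance, with the AzCopy command-line tool, the uploads might look like the following. The storage account name, container name, and account key are placeholders, and the file names match the earlier Fabrikam examples:

AzCopy /Source:C:\TFSDataImportFiles /Dest:https://fabrikamimport.blob.core.windows.net/import /DestKey:{storage account key} /Pattern:Tfs_Foo.dacpac
AzCopy /Source:C:\TFSDataImportFiles /Dest:https://fabrikamimport.blob.core.windows.net/import /DestKey:{storage account key} /Pattern:IdentityMap.csv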

Generating a SAS Key

A shared access signature (SAS) key provides delegated access to resources in a storage account. This allows you to give Microsoft the lowest level of privilege required to access your data for executing the import. At a minimum, we require both read and list permission to the container hosting the files you uploaded in the previous step. The SAS key can also be time limited to cut off access after a desired period has passed. If you do time limit the key, it's strongly recommended that it remain valid for a minimum of seven days.

There are several ways to generate a SAS key. The recommended way is to use the Microsoft Azure Storage Explorer. After installing the tool you can complete the following steps to generate a SAS Key:

  1. Connect your storage account to the tool by using one of the two account keys
  2. Once the storage account has been connected you can expand out the list of blob containers in the account
  3. Right click on the blob container that contains your import files and select "Get Shared Access Signature..."
  4. Ensure that read and list permissions are selected and extend the expiration time for the key. It's recommended that your SAS Key be valid for at least 7 days

Microsoft Azure Storage Explorer

  5. Click Create and copy the URL provided

You will input the newly generated SAS Key into your import specification file as the "PackageLocation" parameter.

Completing the Import Specification

Earlier in the process you partially filled out the import specification file, generally known as import.json. At this point, you have enough information to fill out all of the remaining fields except for the import code. The import code is covered in the import section below. Open your import specification file and fill out the following fields:

  • PackageLocation - Place the SAS key generated in the last step here.
  • DacpacFile - Ensure the name in this field is the same as the DACPAC file you uploaded to the storage account, including the ".dacpac" extension.
  • IdentityMapFile - Ensure the name in this field is the same as the identity mapping file you uploaded to the Azure storage container, including the ".csv" extension.

Using the Fabrikam example, the final import specification file should look like the following:

Completed import specification file
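In text form, the three values described above might sit together roughly like this. This is only a flattened sketch with placeholder values - your generated import.json defines the exact field names and nesting, so edit its entries in place rather than copying this structure:

{
  "PackageLocation": "https://fabrikamimport.blob.core.windows.net/import?sv=...&sig=...",
  "DacpacFile": "Tfs_Foo.dacpac",
  "IdentityMapFile": "IdentityMap.csv"
}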

Now you're ready to actually queue an import to Team Services!

Running an Import

The great news is that your team is now ready to begin the process of running an import. It's recommended that your team start with a dry run import and then finally a production run import. Dry run imports allow your team to see how the end results of an import will look, identify potential issues, and gain experience before heading into your production run. To queue imports you will need to use one of the import codes that was given to your team as part of the preview.

Before proceeding, ensure that you’ve received your import codes for the TFS Database Import Service preview. Be sure to download the migration guide as requesting invitation codes is covered in Phase 1 within the guide.

Considerations for Roll Back Planning

A common concern that teams have for the final production run is what the rollback plan will be if anything goes wrong with the import. This is also why we highly recommend doing a dry run, so you can test the import settings and identity map that you provide to the TFS Database Import Service.

Rollback for the final production run is fairly simple. Before you queue the import, you will detach the team project collection from Team Foundation Server, which will make it unavailable to your team members. If, for any reason, you need to roll back the production run and bring Team Foundation Server back online for your team members, you can simply attach the team project collection on-premises again and inform your team. They can continue to work as normal while your team regroups to understand any potential failures.

Determining the Type of Import

Imports can be queued as either a dry run or a production run. Dry runs are for testing, and production runs are for when your team intends to use the account full time in Team Services once the import completes. Which type of import is run is determined by the code that you provide in the import specification file. You will have two import codes: one for a dry run and the other for a production run. Select the code that matches the type of run you wish to queue and place it in the import specification file in the "ImportCode" parameter.

Completed import specification file with import code

Each import code is valid until an import has been successfully completed. The same code may be used if the import failed and you need to queue it again.

Queueing an Import

Important: Before proceeding, ensure that your collection was detached prior to generating a DACPAC or uploading the collection database to a SQL Azure VM. If you didn't complete this step the import will fail.

Starting an import is done by using TfsMigrator's import command. The import command takes an import specification file as input. It will parse the file to ensure the provided values are valid and, if successful, it will queue an import to Team Services.

To get started, open a command prompt and CD to the path where you have TfsMigrator placed. Once there, it's recommended that you take a moment to review the help text provided with the tool. Run the following command to see the guidance and help for the import command:

TfsMigrator import /help

The command to queue an import will have the following structure:

TfsMigrator import /importFile:{location of import specification file}

Here is an example of a completed import command:

TfsMigrator import /importFile:C:\TFSDataImportFiles\import.json

Once the validation passes, you will be asked to sign in to AAD. It's important that you sign in with an identity that is a member of the same AAD tenant that the identity mapping file was built against. The user that signs in will become the owner of the imported account.

After the import starts, the user that queued the import will receive an email. Shortly after that, the team will be able to navigate to the imported account to check on the status. For now, it will show a 503 message indicating the account is offline for data import. Once the import completes, your team will be directed to sign in. The owner of the account will also receive an email when the import finishes.

Dry Run Accounts

Dry run imports help teams test the migration of their collections. It's not expected that these accounts will remain around forever, but rather that they exist for a short timeframe. In fact, before a production migration can be run, the complementing dry run account will need to be deleted. All dry run accounts have a limited existence and will be automatically deleted after a set period of time. When the account will be deleted is included in the success email received after the import completes. Be sure to take note of this date and plan accordingly. Once that time period passes, the dry run account will be deleted. If your team is ready to perform a production migration before then, you will need to manually delete the account.

Be sure to check out the post import documentation for additional details on post import activities. Should your import encounter any problems, be sure to review the import troubleshooting steps.