Rules to Better DevOps

If you still need help, visit our DevOps consulting page and book in a consultant.

  1. Do you know what the goal of DevOps is?


    You should know what's going on with your errors and usage.

    The goal should be: 

    A client calls and says: "I'm having problems with your software."

    Your answer: "Yes, I know. Each morning we check the health of the app and we already saw a new exception, so I already have an engineer working on it."


  2. DevOps – Stage 1: Do you know what things to measure?

    Before you begin your journey into DevOps, you should assess yourself to see where your project is at and where you can improve.

    Take this survey to find out your DevOps index:

    Figure: DevOps Survey
    Figure: If you prefer, you can download and print this survey in PDF
  3. DevOps – Stage 2: Do you know what things to automate?

    Once you’ve identified the manual processes in Stage 1, you can start looking at automation. The best tool for build and release automation is Azure DevOps.

    See our Rules to Better Continuous Deployments with TFS.

    Figure: In Azure DevOps you can automate application deployment to a staging environment and automatically run tests before deploying to production​
  4. DevOps – Stage 3: Do you know what metrics to collect?

    Now that your team is spending less time deploying the application, you’ve got more time to improve other aspects of the application, but first you need to know what to improve. 

    Here are a few easy things to gather metrics on:

    Application Logging (Exceptions)

    See how many errors are being produced, and aim to reduce this as the product matures.

    It's not only exceptions you should be looking at, but also how your users are using the application, so you can see where to invest your time. Useful tools include:

    • Application Insights
    • Google Analytics
    • Pulse
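    As a minimal illustration of the kind of logging metric worth gathering, here is a plain-Python sketch (the log format is hypothetical) that counts exceptions by type so you can see which errors dominate:

```python
from collections import Counter

def top_exceptions(log_lines, n=3):
    """Count exception types in log lines of the (hypothetical)
    form 'ERROR <ExceptionType>: message' and return the n most common."""
    counts = Counter()
    for line in log_lines:
        if line.startswith("ERROR "):
            exc_type = line[len("ERROR "):].split(":", 1)[0]
            counts[exc_type] += 1
    return counts.most_common(n)

log = [
    "ERROR NullReferenceException: object not set",
    "INFO user logged in",
    "ERROR HttpException: 404 not found",
    "ERROR NullReferenceException: object not set",
]
print(top_exceptions(log))  # NullReferenceException is the most frequent
```

    In practice a tool like Application Insights does this aggregation for you; the point is simply that raw error logs become actionable once you rank them by frequency.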

    Application Metrics

    Application/Server performance – track how your code is running in production, that way you can tell if you need to provision more servers or increase hardware specs to keep up with demand


    Figure: Application Insights gives you information about how things are running and whether there are detected abnormalities in the telemetry


    Figure: Azure can render the Application Insights data on a nice dashboard so you can get a high level view of your application

    Figure: App Center lets you monitor app install stats, usage and errors from phones, just like an app running in Azure

    Process Metrics

    Collecting stats about the application isn't enough, you also need to be able to measure the time spent in the processes used to develop and maintain the application. You should keep an eye on and measure:

    • Sprint Velocity
    • Time spent in testing
    • Time spent deploying
    • Time spent getting a new developer up to speed
    • Time spent in Scrum ceremonies
    • Time taken for a bug to be fixed and deployed to production

    Code Metrics

    The last set of metrics you should be looking at revolves around the code and how maintainable it is. You can use tools like Code Auditor, ReSharper and SonarQube (see the code analysis rule below).

  5. DevOps – Stage 4: Do you continually improve processes?

    Now that you’ve got the numbers, you can make decisions on what needs improvement and go through the DevOps cycle again.

    Here are some examples:​

    • For exceptions, review your exception log (ELMAH, RayGun, HockeyApp)
      • Add the important ones onto your backlog for prioritization​
      • Add an ignore to the exceptions you don't care about to reduce the noise (e.g. 404 errors)
      • You can do this as the exceptions appear, or prior to doing your Sprint Review as part of the backlog grooming
      • You don't have to get the exception log down to 0; just action the important ones and reduce the noise so that the log is still useful
    • For code quality, add "Code Auditor and ReSharper warnings down to 0 on files you’ve changed" to your Definition of Done
    • For code quality, add SonarQube and identify your technical debt and track it
    • For application/server performance, add automated load tests, add code to auto scale up on Azure
    • For application usage, concentrate on features that get used the most and improve and streamline those features
  6. Do you evaluate the processes?

    Often an incorrect process is the main source of problems. Developers should be able to focus on what is important for the project rather than getting stuck on things that cause them to spin their wheels.

    1. Are devs getting bogged down in the UI?
    2. Do you have continuous integration and deployment?
    3. Do you have a Schema Master?
    4. Do you have a DevOps​ Master?
    5. Do you have a Scrum Master?

    Note: Keep this brief since it is out of scope. If this step is problematic, there are likely other things you may need to discuss with the developers about improving their process. For example, are they using Test Driven Development? Are they checking in regularly? All this and more should be saved for the Team & Process Review.

  7. Do you know how DevOps fits in with Scrum?

    DevOps and Scrum complement each other very well. Scrum is about inspecting and adapting with the help of the Scrum ceremonies (Standup, Review, Planning and Retro). DevOps is all about building, measuring and improving with the help of tools and automation.
    Figure: Traditional Scrum Process
    Figure: Scrum with DevOps

    With DevOps, we add tools to help us automate slow processes like build and deployment, then add metrics to give us numbers that quantify our processes. Then we gather the metrics and figure out what can be improved.

    For example, with exception handling you may be using a tool like RayGun or Elmah and have 100s of errors logged. So what do you do with these errors? You can:

    1. Add each one to your backlog
    2. Add a task to each sprint to "Get exceptions to 0"​​​

    The problem with the above is that not all exceptions are equal, and most of the time they are not more important than the planned PBIs being worked on. No developer likes working a whole sprint just looking at exceptions. What should happen is:

    1. Have the exceptions visible in your development process (e.g. using Slack, or adding them as something to check before Sprint Planning)
    2. Triage the exceptions and add them to the backlog if they are urgent and important
    3. Add ignore filters to the exception logging tool to ignore errors you don't care about (e.g. 404s)
    4. Prioritize the exceptions on the backlog
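    The triage steps above can be sketched in a few lines. This is an illustrative plain-Python sketch (the error strings and ignore patterns are made up), not a real exception tool's API:

```python
from collections import Counter

IGNORED = ("404",)  # noise patterns to filter out, e.g. 404 errors

def triage(exceptions):
    """Drop ignored noise, group the remaining raw exception messages,
    and return them most-frequent-first, ready to prioritize on the backlog."""
    counts = Counter(e for e in exceptions
                     if not any(p in e for p in IGNORED))
    return [exc for exc, _ in counts.most_common()]

errors = ["TimeoutException", "404 NotFound", "TimeoutException",
          "NullReferenceException", "404 NotFound", "404 NotFound"]
print(triage(errors))  # TimeoutException first: it occurred most often
```

    Tools like RayGun do this grouping and filtering for you; the sketch just shows why ignore filters plus frequency ordering turn a wall of errors into a short, prioritized list.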

    The goal here is to make sure you're not missing important exceptions and to reduce the noise. You want these tools to support your efforts and make you more productive, not just be another time sink.

  8. Do you know why you want to use Application Insights?

    Knowing the holistic health of your application is important once it has been deployed into production. Getting feedback on your availability, errors, performance, and usage is an important part of DevOps.
    We recommend using Application Insights, as getting it set up and running is quick, simple and relatively painless.

    Application Insights will tell you if your application goes down or runs slowly under load. If there are any uncaught exceptions, you'll be able to drill into the code to pinpoint the problem. You can also find out what your users are doing with the application so that you can tune it to their needs in each development cycle.

    Figure:  When developing a public website, you wouldn't deploy without Google Analytics to track metrics about user activity.
    Figure: For similar reasons, you shouldn't deploy a web application without metric tracking on performance and exceptions
    1. You need a portal for your app
    2. You need to know spikes are dangerous
    3. You need to monitor:
      1. Errors
      2. Performance
      3. Usage
    Figure: Spikes on an Echidna are dangerous
    Figure: Spikes on a graph are dangerous

    To add Application Insights to your application, make sure you follow the rule Do you know how to set up Application Insights?

    Can't use Application Insights? Check out the following rule: Do you use the best exception handling library?

  9. Do you know how to analyse your web application usage with Application Insights?

    You've set up your Application Insights as per the rule 'Do you know how to set up Application Insights?'.

    Your daily failed requests are down to zero, and you've tightened up any major performance problems.

    Now you will discover that understanding your users' usage within your app is child's play.

    Application Insights provides developers with two different levels of usage tracking. The first is provided out of the box, made up of user, session, and page view data. However, it is more useful to set up custom telemetry, which enables you to track users effectively as they move through your app.

    Figure: The most frequent event is someone filling out their timesheet.

    It is very straightforward to add these to an application by adding a few lines of code to the hot points of your app.
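    Conceptually, a custom event is just a named payload recorded at a hot point in your code. Here is a stdlib-only Python sketch; the TelemetryClient class below is a hypothetical stand-in for a real Application Insights SDK client, and the event names are made up:

```python
import json

class TelemetryClient:
    """Hypothetical stand-in for an Application Insights client:
    it simply collects named events with optional properties."""
    def __init__(self):
        self.events = []

    def track_event(self, name, properties=None):
        self.events.append({"name": name, "properties": properties or {}})

tc = TelemetryClient()

# At the "hot points" of the app, record what the user did:
tc.track_event("TimesheetSubmitted", {"user": "alice", "hours": "8"})
tc.track_event("PageVisited", {"page": "/leaderboard"})

print(json.dumps(tc.events, indent=2))
```

    The real SDK sends these events to the Application Insights service, where the custom events blade aggregates them, but the shape of the call at each hot point is this simple.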

    Feel constricted by the Application Insights custom events blade? Then you can export your data and display it in Power BI in a number of interesting ways.

    Figure: Power BI creates an easy to use and in-depth dashboard for viewing the health of the application 

    Previously we would have had to perform a complicated setup to allow Application Insights and Power BI to communicate. Now it is as easy as adding the Application Insights content pack.
    Figure: Content packs make it simple to interact and pull data from third-party services
  10. Do you know how to find performance problems with Application Insights?

    Once you have set up your Application Insights as per the rule 'Do you know how to set up Application Insights?' and you have your daily failed requests down to zero, you can start looking for performance problems. You will discover that uncovering your performance-related problems is relatively straightforward.

    The main focus of the first blade is the 'Overview timeline' chart, which gives you a bird's-eye view of the health of your application.

    Figure: There are 3 spikes to investigate (one on each graph), but which is the most important? Hint: look at the scales!

    Developers can see the following insights:

    • Number of requests to the server and how many have failed (First blue graph)
    • The breakdown of your page load times (Green Graph)
    • How the application is scaling under different load types over a given period
    • When your key usage peaks occur

    Always investigate the spikes first. Notice how the two blue ones line up? That should be investigated. However, notice that the green peak is actually at 4 hours, so it is definitely the first thing we'll look at.

    Figure: The 'Average of Browser page load time by URL base' graph will highlight the slowest page.

    Since a single request took four hours in the 'Average of Browser page load time by URL base' graph, it is important to examine this request.

    It would be nice to see the prior week for comparison, however, we're unable to in this section.

    Figure: In this case, the user agent string gives away the cause, Baidu (a Chinese search engine) got stuck and failed to index the page.

    At this point, we'll create a PBI to investigate the problem and fix it.

    (Suggestion to Microsoft, please allow annotating the graph to say we've investigated the spike)

    The other spike which requires investigation is in the server response times. To investigate it, click on the blue spike. This will open the Server response blade that allows you to compare the current server performance metrics to the previous weeks. 

    Figure: In this case, the most important detail to action is the Get Healthcheck issue. Now you should be able to optimise the slowest pages​

    In this view, we find performance related issues when the usage graph shows similarities to the previous week but the response times are higher. When this occurs, click and drag on the timeline to select the spike and then click the magnifying glass to ‘zoom in’. This will reload the ‘Average of Server response time by Operation name’ graph with only data for the selected period.

    Looking beyond the Average Response Times

    High average response times are easy to find and indicate an endpoint that is usually slow - so this is a good metric to start with. But sometimes a low average value can hide a few much slower requests among many fast, successful ones.

    Application Insights plots the distribution of response time values, allowing potential issues to be spotted.


    Figure: This distribution graph shows that despite an average value of 54.9 ms, 99% of requests were under 23 ms but a few requests took up to 32 seconds!
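    To see how an average can mask outliers, here is a small plain-Python sketch with made-up response times: a single pathological request skews the mean far away from what almost every user experienced.

```python
def percentile(values, p):
    """Nearest-rank percentile: the value below which roughly p% of values fall."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

# 99 fast requests (20 ms) plus one pathological 32-second outlier
response_times = [20] * 99 + [32_000]

mean = sum(response_times) / len(response_times)
print(f"mean={mean:.1f}ms p99={percentile(response_times, 99)}ms "
      f"max={max(response_times)}ms")
# The mean (339.8 ms) represents nobody: 99% of requests took 20 ms while the
# slowest took 32 seconds - so inspect the distribution, not just the average.
```

    This is exactly why the distribution view above is worth checking even when the average looks healthy.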

  11. Errors – Do you know the daily process to improve the health of your web application?

    Application Insights can surface an overwhelming number of errors in your web application, so use just-in-time bug processing to handle them.

    The goal is that each morning you check your web application's dashboard and find zero errors. But what happens if there are multiple errors? Don't panic; follow this process to improve your application's health.

    Figure: Every morning developers check Application Insights for errors​

    Once you have found an exception you can drill down into it to discover more context around what was happening. You can find out the user's browser details, what page they tried to access, as well as the stack trace (Tip: make sure you follow the rule on How to set up Application Insights to enhance the stack trace).

    Figure: Drilling down into an exception to discover more.

    It's easy to be overwhelmed by all these issues, so don't create a bug for each issue, or even for the top 5 issues. Simply create one bug for the most critical issue. Reproduce, fix and close the bug, then you can move on to the next one and repeat. This is just-in-time bug processing, and it will move your application towards better health one step at a time.
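    The "one bug at a time" loop can be sketched as follows (illustrative plain Python with made-up issue data; severity here is approximated by occurrence count):

```python
def next_bug(issues):
    """Return the single most critical unresolved issue, or None.
    Each issue is a dict with 'title', 'count' and 'resolved' keys."""
    open_issues = [i for i in issues if not i["resolved"]]
    if not open_issues:
        return None
    return max(open_issues, key=lambda i: i["count"])

issues = [
    {"title": "NullReferenceException on save", "count": 120, "resolved": False},
    {"title": "Timeout on report page", "count": 45, "resolved": False},
]

bug = next_bug(issues)       # create ONE bug for this issue only
print(bug["title"])
bug["resolved"] = True       # reproduce, fix and close it...
print(next_bug(issues))      # ...then take the next most critical issue
```

    Bulk-creating a bug per error would just move the wall of exceptions into your backlog; this loop keeps the backlog honest.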

    Figure: Bad example - creating all the bugs
    Figure: Good example - create the first bug (unfortunately bug has to be created manually)
  12. Do you know how to handle errors in Raygun?

    Your team should always be ensuring that the health of the application is continually improving.

    The best way to do that is to check the exceptions that are being logged in the production application. Every morning, fix the most serious bug logged over the last week. Once it is fixed, email yesterday's application health to the Product Owner.

    There's traditional error logging software like Log4Net or Elmah, but they just give you a wall of errors that are duplicated, and they don't give you the ability to mark anything as complete. You'll need to manually clear out the errors and move them into your task tracking system (e.g. TFS).

    This is where RayGun or Application Insights comes into the picture. RayGun gives you the following features:

    • Grouping exceptions
    • Ignoring/filtering exceptions
    • Triaging exceptions (mark them as resolved)
    • Integrations with TFS (to create a Bug) and Slack
    • Tracking the exceptions to a deployment
    • Seeing which errors are occurring most often
    Figure: Bad Example - Elmah gives you a wall of exceptions and no way to flag exceptions as completed

    Hi Adam,
    Please find below the Raygun Health Check for TimePro:

    Figure: Raygun health check for TimePro in the past 7 days 


    Figure: Resolved issues in the past 7 days​


    Figure: The next issue to be worked on​


    Figure: Email with Raygun application health report​​​ 

  13. Do you do exploratory testing?

    Use Microsoft's Exploratory Testing - Test & Feedback extension - to perform exploratory tests on web apps directly from the browser.

    Capture screenshots, annotate them and submit bugs as you explore your web app - all directly from Chrome (or Firefox) browser. Test on any platform (Windows, Mac or Linux), on different devices. No need for predefined test cases or test steps. Track your bugs in the cloud with Visual Studio Team Services (VSTS).

    Ravi walks Adam through the exploratory testing extension - You can also watch on SSW TV
    Ravi Shanker and Adam Cogan talk about the test improvements in Visual Studio Team Services and the Chrome Test & Feedback​ extension  - You can also watch on SSW TV
    Official video from Microsoft Visual Studio channel

    1. Go to Visual Studio Marketplace and click install.
      Figure: Microsoft Test & Feedback ​(was Exploratory Testing) extension 
    2. Click "Add to Chrome" to add the extension to the browser on your computer.
      Figure: Chrome Web Store page for Test & Feedback extension
    3. Go to Chrome.
    4. Start a session by clicking on the Chrome extension and then click start a session.
      Figure: Chrome extension icon
      Figure: Test & Feedback start session button
    5. Upload the screenshot to a PBI.

      Figure: PBI in Visual Studio Team Services (VSTS) showing the screenshot


  14. Do you use the best Code Analysis tools?

    Whenever you are writing code, you should always make sure it conforms to your team's standards. If everyone is following the same set of rules, someone else’s code will look more familiar and more like your code - ultimately easier to work with.

    No matter how good a coder you are, you will always miss things from time to time, so it's a really good idea to have a tool that automatically scans your code and reports on what you need to change in order to improve it.

    Visual Studio has a great Code Analysis tool to help you look for problems in your code. Combine this with Jetbrains' ReSharper and your code will be smell free.

    The levels of protection are:

    Figure: You wouldn't play cricket without protective gear and you shouldn't code without protective tools

    Level 1

    Get ReSharper to green on each file you touch. You want the files you work on to be left better than when you started. See Do you follow the boyscout rule?

    Tip: You can run through a file and tidy it very quickly if you know two great keyboard shortcuts:

    • Alt + [Page Down/Page Up]: Next/Previous ReSharper Error/Warning
    • Alt + Enter: Smart refactoring suggestions
    Figure: ReSharper will show Orange when it detects that there is code that could be improved
    Figure: ReSharper will show green when all code is tidy

    Level 2

    Is to use Code Auditor.

    Figure: Code Auditor shows a lot of warnings in this test project

    Note: Document any rules you've turned off.

    Level 3

    Is to use Link Auditor.

    Note: Document any rules you've turned off.

    Level 4

    Is to use StyleCop to check that your code has consistent style and formatting.

    Figure: StyleCop shows a lot of warnings in this test project

    Level 5

    Run Code Analysis (was FxCop) with the default settings, or ReSharper with Code Analysis turned on.

    Figure: Run Code Analysis in Visual Studio
    Figure: The Code Analysis results indicate there are 17 items that need fixing

    Level 6

    Ratchet up your Code Analysis rules until you get to 'Microsoft All Rules'.

    Figure: Start with the Minimum Recommended Rules, and then ratchet up.

    Level 7

    Is to document any rules you've turned off.

    All of these tools allow you to disable rules that you're not concerned about. There's nothing wrong with disabling rules you don't want checked, but you should make it clear to developers why those rules were removed.

    Create a GlobalSuppressions.cs file in your project with the rules that have been turned off and why.

    Figure: The suppressions file tells Code Analysis which rules it should disable for specific code blocks

    More Information: Do you make instructions at the beginning of a project and improve them gradually?


    Level 8

    The gold standard is to use SonarQube, which gives you the code analysis of the previous levels as well as the ability to analyze technical debt and to see which code changes had the most impact on technical debt.
    Figure:  SonarQube workflow with Visual Studio and Azure DevOps​
    Figure: SonarQube gives you the changes in code analysis results between each check-in

  15. Do you look for Code Coverage?

    Code Coverage shows how much of your code is covered by tests and can be a useful tool for showing how effective your unit testing strategy is. However, it should be looked at with caution.

    • You should focus on *quality* not *quantity* of tests
    • You should write tests for fragile code first and not waste time testing trivial methods
    • Remember the 80-20 rule - very high test coverage is a noble goal, but there are diminishing returns
    • If you're modifying code, write the test first, then change the code, then run the test to make sure it passes (AKA red-green-refactor).
    • You should run your tests regularly (see Do you follow a Test Driven Process). Ideally, they'll be part of your build (see Do you know the minimum builds to create on any branch)
    Figure: Code Coverage metrics in Visual Studio. This solution has a very high code coverage percentage (around 80% on average)

    Tip: Do you use Live Unit Testing to see code coverage?

  16. Do you use Slack as part of your DevOps?

    Figure: See how Slack can be set up to improve your DevOps

    With all these different tools being used to collect information in your application, a developer will frequently need to visit many different sites to get information like:
    • Was the last build successful?
    • What version is in production?
    • What errors are being triggered on the app?
    • Is the server running slow?
    • What is James working on?
    This is where a tool like Slack comes in handy. It can help your team aggregate this information from many separate sources into one dedicated channel for your project. Another benefit is that a new team member instantly has access to the full history of the channel, so no conversations are lost.

    At SSW we integrate Slack with:

    • Octopus Deploy
    • TeamCity
    • Visual Studio

    Even better, you can create bots in Slack to manage things like deployments and updating release notes.
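    A common way to wire these tools into Slack is an incoming webhook, which accepts a simple JSON payload. A minimal Python sketch (the webhook URL is a placeholder you would get from your Slack workspace; the message text is made up):

```python
import json
from urllib import request

WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def build_payload(text):
    """Build the JSON body that Slack incoming webhooks expect."""
    return json.dumps({"text": text}).encode("utf-8")

def notify(text):
    """POST a message to the channel behind the webhook."""
    req = request.Request(WEBHOOK_URL, data=build_payload(text),
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)  # actually posts - needs a real webhook URL

print(build_payload("Build 1.2.3 deployed to staging"))
```

    Calling notify() from a build or deployment step is all it takes to get "what version is in production?" answered in the channel automatically.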

    Good example - One centralized location for team chat, deployment issues, exceptions and TFS changes
  17. Do you create a Continuous Integration Build for the Solution?

    Before you configure continuous deployment, you need to ensure that the code you have on the server compiles. A successful CI build without deployment lets you know the solution will compile.

    Figure: The Build definition name should include the project name. The reason for this is that builds for all solutions are placed in the same folder, and including the build name makes the Build Drop folder organised
    Figure: On the Trigger tab choose Continuous Integration. This ensures that each check-in results in a build
    Figure: On the Workspace tab you need to include all source control folders that are required for the build
    Figure: Enter the path to your Drop Folder (where you drop your builds)
    Figure: Choose the Default Build template and enter the DeployOnBuild argument to the MSBuild Arguments parameter of the build template
    Figure: Queue a build, to ensure our CI build is working correctly
    Figure: Before we setup continuous deployment it is important to get a successful basic CI build
  18. Do you know how to name documents?

    When naming documents, use kebab-case to separate words to make your files more easily discoverable.

    A file name without spaces means that the search engine doesn't know where one word ends and the other one begins. This means that searching for 'monthly' or 'report' might not find this document.


    Bad Example: File name doesn't contain any separators between words

    As far as search goes, using spaces is actually a usable option. What makes spaces less preferable is the fact that the URL to this document will have those spaces escaped with the sequence %20. E.g. https://sharepoint/site/library/Monthly%20Report.docx. URLs with escaped spaces are longer and less human-readable.

    Monthly Report.docx 

    Bad Example: File name uses a space to separate words

    Underscores are not valid word separators for search in SharePoint, and are not recommended by others. Also, underscores are sometimes less visible to users, for example when a hyperlink is underlined. When reading an underlined hyperlink, it is often possible for the user to mistakenly think that the URL contains spaces instead of underscores. For these reasons it is best to avoid underscores in file names and titles.


    Bad Example: File name uses an underscore (snake_case) to separate words

    A hyphen is the best choice, because it is understood both by humans and all versions of SharePoint search.


    Good Example: File name uses kebab-case to separate words
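    If you have a folder of existing documents to rename, the convention is easy to automate. A small plain-Python sketch that converts a file name to kebab-case:

```python
import re

def to_kebab_case(filename):
    """Convert spaces, underscores and camel-case in a file name to kebab-case."""
    stem, dot, ext = filename.rpartition(".")
    if not dot:                                          # no extension
        stem, ext = filename, ""
    stem = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "-", stem)  # split camelCase
    stem = re.sub(r"[ _]+", "-", stem)                   # spaces/underscores
    stem = re.sub(r"-{2,}", "-", stem).strip("-").lower()
    return stem + (("." + ext) if ext else "")

print(to_kebab_case("Monthly Report.docx"))   # monthly-report.docx
print(to_kebab_case("MonthlyReport.docx"))    # monthly-report.docx
print(to_kebab_case("monthly_report.docx"))   # monthly-report.docx
```

    All three of the "bad example" names above normalize to the same searchable, URL-friendly name.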

    Add relevant metadata where possible

    If a document library is configured with metadata fields, add as much relevant information as you can. Metadata is more highly regarded by search than the contents of documents, so by adding relevant terms to a document's metadata, you will almost certainly have a positive effect on the relevance of search results.

    Use descriptive file names and titles

    The file name and title are regarded more highly by search than the content within documents. Also, the title or file name is what is displayed in the search results, so by making it descriptive you are making it easier for people who perform searches to identify the purpose of your document.


  19. Do you publish simple websites directly to Windows Azure from Visual Studio Online?

    TFS and Windows Azure work wonderfully together. It only takes a minute to configure continuous deployment from Visual Studio Online to a Windows Azure Web Site or Cloud Service.

    This is by far the most simple method to achieve continuous deployment of your websites to Azure.

    But if your application is more complicated, or you need to run UI tests as part of your deployment, you should be using Octopus Deploy instead, as per the 'Do you use the best deployment tool' rule.

    Figure: Setting up deployment from source control is simple from within the Azure portal
    Figure: Deployment is available from a number of different source control repositories

    Suggestion to Microsoft: We hope this functionality comes to on-premise TFS and IIS configurations in the next version.

  20. Do you use a Project Portal for your team and client?

    When a new developer joins a project, there is often a sea of information that they need to learn right away to be productive. This includes things like:

    1. Who the Product Owner is and who the Scrum Master is
    2. Where the backlog is
    3. Where the automated builds are
    4. Where the staging and production environments are
    5. How to set up the development environment for the project

    Make it easy for the new developer by putting all this information in a central location like the Visual Studio dashboard.

    Figure: Bad Example - Don't stick with the default dashboard, it's almost useless
    Figure: Good Example - This dashboard contains all the information a new team member would need to get started

    The dashboard should contain:

    1. Who the Product Owner is and who the Scrum Master is
    2. The Definition of Ready  and the Definition of Done
    3. When the daily standups​ occur and when the next sprint review is scheduled
    4. The current sprint backlog
    5. Show the current build status
    6. Show links to:
      • Staging environment
      • Production environment
      • Any other external service used by the project e.g. Octopus Deploy, Application Insights, RayGun, Elmah, Slack

    You should also add the standard _Instructions.docx to your solution file for additional details on getting the project up and running in Visual Studio.

    For particularly large and complex projects you can use an induction tool like SugarLearning to create a course for getting up to speed with the project.


  21. Do you use the best deployment tool?

    Often, deployment is either done manually or as part of the build process. But deployment is a completely different step in your lifecycle. It's important that deployment is automated, but done separately from the build process.

    There are two main reasons you should separate your deployment from your build process:

    1. You're not dependent on your servers for your build to succeed. Similarly, if you need to change deployment locations, or add or remove servers, you don't have to edit your build definition and risk breaking your build.
    2. You want to make sure you're deploying the *same* (tested) build of your software to each environment. If your deployment step is part of your build step, you may be rebuilding each time you deploy to a new environment.
    The best tool for deployments is Octopus Deploy.
    Figure: Good Example - SSW uses Octopus Deploy to deploy SugarLearning

    Octopus Deploy allows you to package your projects in Nuget packages, publish them to the Octopus server, and deploy the package to your configured environments. Advanced users can also perform other tasks as part of a deployment like running integration and smoke tests, or notifying third-party services of a successful deployment.

    Version 2.6 of Octopus Deploy introduced the ability to create a new release and trigger a deployment when a new package is pushed to the Octopus server. Combined with Octopack, this makes continuous integration very easy from Team Foundation Server.

    What if you need to sync files manually?

    Then you should use an FTP client which allows you to update only the files you have changed. FTP Sync and Beyond Compare are recommended, as they compare all the files on the web server to a directory on a local machine (including date updated and file size) and report which file is newer and which files will be overridden by uploading or downloading. You should only make changes on the local machine, so you can always upload files from the local machine to the web server.

    This process allows you to keep a local copy of your live website on your machine - a great backup as a side effect. 

    Whenever you make changes on the website, upload them as soon as they are approved. Tick the box that says "sync sub-folders", but when you click sync, be careful to check any files that may be marked for a reverse sync, and reverse the direction on those files. For most general editing tasks, changes should be uploaded as soon as they are done - don't leave it until the end of the day, as you won't be able to remember which pages you've changed. When you upload a file, sync EVERY file in that directory; it's highly likely that un-synced files have been changed by someone and forgotten to be uploaded. Also make sure that folders deleted on the local machine are deleted on the remote server.


    If you are working on some files that you do not want to sync, put a _DoNotSyncFilesInThisFolder_XX.txt file in the folder (replace XX with your initials). If you see files that are to be synced and you don't see this file, find out who made the changes and tell them to sync. The reason for this TXT file is so that people don't keep chasing up files that are intentionally out of sync.

    NOTE: Immediately before deploying an ASP.NET application with FTP Sync, ensure that the application compiles - otherwise it will not work correctly on the destination server (even though it still works on the development server).