As database professionals, we typically work in an exact science. For example, a common practice in business intelligence (BI) solutions is to create duplicate copies of data sets and then compare the results from the different sources to confirm they match. If you extract five years' worth of data from an application's database into a data mart, the results in the data mart must match the results in the application's database, even if the table structures were changed and older records were archived. You might then build a cube or semantic model and again verify that its results exactly match the source system. If the numbers don't add up, the results are rejected: you know something is wrong and must be corrected. I have to confess that not getting a conclusive result when working on a tough data problem sometimes keeps me up at night.
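
The reconciliation practice described above can be sketched in a few lines. This is a minimal, hypothetical example (the table names, columns, and values are my own illustrative assumptions, not from the text): yearly totals are computed from a "source" table and from a reshaped "data mart" table, and any year whose totals disagree is flagged.

```python
# Minimal sketch of a source-vs-data-mart reconciliation check.
# Table names, columns, and values are illustrative assumptions.
import sqlite3

def yearly_totals(conn, table):
    """Return {year: total_amount} for the given sales table."""
    rows = conn.execute(f"SELECT year, SUM(amount) FROM {table} GROUP BY year")
    return dict(rows.fetchall())

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE source_sales (year INT, amount REAL)")
# The mart reshapes the table (extra surrogate key), but the
# aggregate numbers must still reconcile with the source.
conn.execute("CREATE TABLE mart_sales (sale_key INT, year INT, amount REAL)")
conn.executemany("INSERT INTO source_sales VALUES (?, ?)",
                 [(2022, 100.0), (2022, 50.0), (2023, 75.0)])
conn.executemany("INSERT INTO mart_sales VALUES (?, ?, ?)",
                 [(1, 2022, 100.0), (2, 2022, 50.0), (3, 2023, 75.0)])

source = yearly_totals(conn, "source_sales")
mart = yearly_totals(conn, "mart_sales")
mismatches = {y: (source.get(y), mart.get(y))
              for y in set(source) | set(mart)
              if source.get(y) != mart.get(y)}
print("reconciled" if not mismatches else f"mismatch: {mismatches}")
```

In a real solution the two queries would run against different systems (the application database and the mart), but the principle is the same: the aggregates either match exactly or the load is rejected.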