Temporal in SQL Server 2016

Description

Today, Scott welcomes Borko Novakovic back to Data Exposed and the Channel 9 studios. Borko, a Program Manager on the SQL Server team, is in Redmond all the way from his home in Belgrade, Serbia to introduce us to the new Temporal technology being introduced in SQL Server 2016. Temporal in SQL Server is designed to simplify the handling of time-varying data. It provides the ability to look at data trends, types of data changes, and the overall evolution of the data within your database.

Borko begins the show by explaining the key scenarios for using Temporal and why it will be included in Azure SQL Database and SQL Server 2016. He then walks us through how to get started with Temporal and how system-time works, then quickly jumps into a demo to show us Temporal in action, including the new easy-to-use T-SQL syntax for implementing Temporal.

As Borko said, this is as close to time traveling with data as we'll get. :)

Tag:

SQL Server

The Discussion

  • Davele

    Interesting but I'm concerned that it doesn't address the very common scheduling / time period overlap style queries. 
    Example 1: Multiple Date Columns in a Row. Matching with some filter period.
    An Order Record has fields: OrderPlacedDate, ManufStartDate, ManufCompleteDate, OrderShippedDate, OrderReceivedDate, OrderPaidDate. (etc)

    Query 1: At July 1, Show Value & Count of Active orders that had not started Manuf, Were being Manufactured, Were waiting for transport, Were in Transit, Were expecting payment, Were 90 days overdue.

    Query 2: List the orders that were in Manufacture sometime in the period Mon 4th - Fri 8th May 2015

    Example 2. Comparing overlapping periods between 2 entities.
    Two Tables in a Hospital.
    PatientAdmissions: AdmissionDate, DischargeDate. Condition( Cancer | Burns | etc)
    DoctorRoster: ShiftStart, ShiftEndDateTime, Specialty(Cancer | Burns |etc)

    Query 1: Show List of Burns Patients & Burns Specialists in Hospital at noon 25 Apr 2015.
    Query 2: Show Count of Burns Patients & Count of Burns Specialists, Grouped by Shift over the past 90 days. 
    The objective is to discover if we have sufficient clinicians to provide adequate care to a patient type. 

    Am I missing something? 

  • sokhaty

    @Davele: If I'm reading your post correctly, except for Ex1 Q2, all the other cases should be expressible and work correctly in terms of "now" (irrespective of the temporal tables).

    For Ex1 Q2, the temporal feature would be a prerequisite to get the answer.

    Temporal would be really helpful if in Ex1 Q1 you were asking what was known to the system "as of", say, a month back about the state of the manufacturing pipeline on July 1st, versus what is known now about the state of the pipeline on July 1st.

    Temporal support is useful when one has to test a forecasting model to see what kind of decision the model would make if it had only the knowledge about the world as it existed then, vs. the decision that would be made now with all the updated and revised facts.

  • bonova

    Thank you for your comments and questions.

    @Davele: Temporal in SQL Server 2016 automatically maintains one time dimension – the transaction time when a data change effectively landed in your database. If time in your application flows "forward only" (i.e. if the business events expressed by the other dates are ordered in time and always followed by a database transaction), then SQL 2016 temporal will give you correct answers to the questions described by your query examples.

    In that case, you would manage all the business dates from your application, while temporal gives you the state of the database tables 'AS OF' any point in time. Your Ex1 Q1 might look like this:

    SELECT * FROM ORDERS FOR SYSTEM_TIME AS OF '2014.07.01'

    WHERE ManufStartDate > '2014.07.01' OR (ManufStartDate < '2014.07.01' AND ManufCompleteDate > '2014.07.01' ...)

    Ex1 Q2 can be written in this form:

    SELECT * FROM ORDERS FOR SYSTEM_TIME CONTAINED_IN ('2015.05.04', '2015.05.08')

    AND (<list of other date conditions>)

    However, if your app frequently changes business dates to values in the past (i.e. if you enter a ManufCompleteDate today with a value from 3 days ago), this model won't be ideal for you.

    Even in that case, though, you can use SQL 2016 temporal for auditing purposes, to detect that past values were changed after the fact.

    As for Ex2: when you have overlapping periods, in some cases a view over the temporal tables is the most efficient solution.

    Ex2 Q1: you need to create a view that joins PatientAdmissions and DoctorRoster based on Condition and Specialty, and apply 'AS OF' to the view (both tables need to be temporal).

    In more complex cases (such as Ex2 Q2), you need to apply FOR SYSTEM_TIME BETWEEN ... AND ... on both tables for the last 90 days, join them on Condition and Specialty, and also add an overlap condition between the two periods in the JOIN part:

    ...AND PatientAdmissions.AdmissionDate BETWEEN DoctorRoster.ShiftStartDate AND DoctorRoster.ShiftEndDate
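    The Ex2 Q1 approach described above could be sketched like this (a minimal illustration, not Borko's exact code; the view name and the Condition = Specialty join condition are assumptions, and both tables must already be system-versioned):

    ```sql
    -- Both underlying tables must be system-versioned temporal tables.
    CREATE VIEW dbo.PatientsWithSpecialists AS
    SELECT p.AdmissionDate, p.DischargeDate, p.Condition,
           d.ShiftStart, d.ShiftEndDateTime, d.Specialty
    FROM dbo.PatientAdmissions p
    JOIN dbo.DoctorRoster d ON p.Condition = d.Specialty;
    GO

    -- FOR SYSTEM_TIME applied to the view propagates to both temporal tables.
    SELECT *
    FROM dbo.PatientsWithSpecialists
    FOR SYSTEM_TIME AS OF '2015-04-25 12:00:00'
    WHERE Condition = 'Burns';
    ```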

    I hope this helps.

    Borko

  • ITsmart

    Great video. Thank you for that!

    I like the new Temporal Table feature, but I do have two questions:

    1. Will it be possible to exclude some columns from keeping history? Then it would be a valid option to replace the regular SCD2 ETL steps.

    2. How did you create the demo history table with "old" dates? If I create a table with history turned on, it starts recording changes now. I would like to be able to start with my "current history".

    Thanks, Edgar

     

  • bonova

    Thank you Edgar - we're glad you find temporal useful for your scenarios. Here are the answers:

    1) Currently we do not support filtering out changes that occur on columns one is not interested in tracking in a DW schema (I guess that was the question). We are aware that some people need this capability, but modifying the ETL logic to exclude these updates is also a viable option...

    2) Yes, you can start with your "current history" - we made the supported ALTER TABLE paths really flexible so you can migrate your existing solution to the built-in support. Imagine you already have a table DimCustomerHistory that contains both current and historical data (the most general case). You can create another (non-temporal) table and move only the current data into it:

    CREATE TABLE dbo.DimCustomer (...);

    INSERT INTO dbo.DimCustomer SELECT * FROM dbo.DimCustomerHistory

    WHERE ValidTo = '9999.12.31';

    --remove actual records from history

    DELETE dbo.DimCustomerHistory WHERE ValidTo = '9999.12.31';

    Now you can use ALTER TABLE to convert the newly created table to temporal and bind it to your history table, which now contains only old data:

    ALTER TABLE dbo.DimCustomer 

    SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.DimCustomerHistory));

    From that point on, all inserts land in dbo.DimCustomer, while row versions replaced by updates and deletes will land in the history table.
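    One detail worth spelling out: before SYSTEM_VERSIONING can be switched ON, dbo.DimCustomer must have a PERIOD FOR SYSTEM_TIME defined over two datetime2 columns. A sketch of that intermediate step, assuming the period columns are named ValidFrom and ValidTo (those names are an assumption carried over from the example above):

    ```sql
    -- Assumes ValidFrom/ValidTo datetime2 columns already exist in dbo.DimCustomer
    -- (carried over from dbo.DimCustomerHistory); add them first if they don't.
    ALTER TABLE dbo.DimCustomer
    ADD PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo);

    ALTER TABLE dbo.DimCustomer
    SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.DimCustomerHistory));
    ```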

     

  • sastrysagi

    Excellent feature for capturing a complete audit trail.

  • Alex

    Great feature! I wrote an article in French about this feature:
    http://frenchdba.com/sql-server-2016-temporal-tables/

    What is the method to delete rows in DepartmentHistory ?

  • bonova

    Thanks for feedback and writing article on temporal!

    If your history table is partitioned you can use PARTITION SWITCH to remove partitions that you don't need.

    This operation does not require SYSTEM_VERSIONING to be temporarily disabled:

    /*Create schema-aligned staging table historyArchive*/

    ALTER TABLE DepartmentHistory SWITCH PARTITION 1 to historyArchive;

    If the history table is not partitioned, then you need to turn SYSTEM_VERSIONING OFF:

    BEGIN TRAN

           ALTER TABLE Department  SET (SYSTEM_VERSIONING = OFF); 

           /* delete history data that is not required */

           DELETE FROM DepartmentHistory

           WHERE SysEndTime < '2014.01.01' 

           /* Re-establish versioning again

             DBA may choose to ignore data consistency check in order to make it fast

           */

           ALTER TABLE Department

           SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE=DepartmentHistory, DATA_CONSISTENCY_CHECK = OFF)); 

    COMMIT

     

  • paSQuaLe

    Awesome features, but what about performance on tables with millions of rows?

  • bonova

    @Pasquale: performance really depends on the workload pattern, not necessarily on table size. Additionally, DML and query performance should be discussed separately.

    If the predominant DML operation in your workload is INSERT, overall DML performance will be comparable with non-temporal tables, since adding new rows introduces negligible overhead (less than 1%).

    Updates and deletes introduce some performance overhead because previous row versions are moved into the history table, but that overhead is less than if you were to implement the same logic inside the app or using triggers. You can control how large that overhead is by configuring the storage options and indexes for the history table.

    In general, temporal gives you full flexibility to optimize the system for your target scenarios, since the current and history tables can be managed and configured independently. You can do that later even if you let the system create the history table for you.

    For large temporal tables we strongly recommend partitioning based on the period column (SysEndTime): not only will you speed up DML and temporal querying, you will also be able to support data aging very efficiently.

    When it comes to querying, people usually distinguish between "warm" and "cold" history: the former is queried relatively frequently, the latter rarely or never. That said, a data aging strategy is the most important factor in controlling system performance with temporal tables.

    Use partitioning to place your warm history on faster disks and get rid of the old data that you do not need anymore (PARTITION SWITCH OUT).

    Consider using clustered columnstore index for the history table if you run analytical queries on historical data – you'll get very good compression and much better performance.

    In general, design the indexes on the history table carefully to get a good balance between DML and query performance.
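    The columnstore recommendation above might look like this in practice (a hedged sketch; the table name dbo.OrdersHistory and the index name are assumptions, and the index name must match the history table's existing clustered index for DROP_EXISTING to apply):

    ```sql
    -- Replace the default clustered rowstore index on the history table with a
    -- clustered columnstore index, for high compression and fast analytics.
    -- The system-generated index is typically named ix_<history_table_name>.
    CREATE CLUSTERED COLUMNSTORE INDEX ix_OrdersHistory
    ON dbo.OrdersHistory
    WITH (DROP_EXISTING = ON);
    ```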

  • Aaron Thomason

    Is it possible to get a history of just the changes (new rows, updated rows, and deleted rows) between two dates? Is there a syntax for getting this data? You showed that in your demo and I am curious to see the SQL for it.

  • Aaron Thomason

    Ah, I figured it out. You can union the old table and the new table, then use a case statement to figure out which of 5 states each row is in (deleted, inserted, old version, new version, or unchanged).

    For a table called Test with id as the pkey, name varchar(20), and period start_date to end_date, you can use this query

    declare @firstTime datetime, @secondTime datetime
    set @firstTime = '2015-06-08 21:11:00'
    set @secondTime = getdate()

    select *
         , case
               when year(end_date) < 9999
                    and not exists (select 1
                                    from dbo.Test
                                    for SYSTEM_TIME AS OF @secondTime t
                                    where all_rows.id = t.id)
                   then 'deleted'
               when year(end_date) < 9999
                   then 'old version'
               when year(end_date) = 9999
                    and not exists (select 1
                                    from dbo.Test
                                    for SYSTEM_TIME AS OF @firstTime t
                                    where all_rows.id = t.id)
                   then 'inserted'
               when year(end_date) = 9999
                    and exists (select 1
                                from dbo.Test
                                for SYSTEM_TIME AS OF @firstTime t
                                where all_rows.id = t.id)
                   then 'new version'
               else 'unchanged'
           end as change_type
    from
        -- union of old rows and new rows with alias all_rows
        (SELECT [id], [name], [start_date], [end_date]
         FROM [dbo].[Test]
         for SYSTEM_TIME AS OF @firstTime
         union
         SELECT [id], [name], [start_date], [end_date]
         FROM [dbo].[Test]
         for SYSTEM_TIME AS OF @secondTime) all_rows

  • Aaron Thomason

    Tweaked it a bit. Borko, let me know if your query is different.

    declare @firstTime datetime, @secondTime datetime
    set @firstTime = '2015-06-08 21:11:45'
    set @secondTime = '2015-06-09 16:11:38'

    select *
         , case
               when year(end_date) < 9999
                    and not exists (select 1
                                    from dbo.Test
                                    for SYSTEM_TIME AS OF @secondTime t
                                    where all_rows.id = t.id)
                   then 'deleted'
               when not exists (select 1
                                from dbo.Test
                                for SYSTEM_TIME AS OF @firstTime t
                                where all_rows.id = t.id)
                   then 'inserted'
               when start_date > @firstTime and end_date > @secondTime
                   then 'new version'
               when start_date < @firstTime and end_date < @secondTime
                   then 'old version'
               else 'no change'
           end as change_type
    from
        (SELECT [id], [name], [start_date], [end_date]
         FROM [dbo].[Test]
         for SYSTEM_TIME AS OF @firstTime
         union
         SELECT [id], [name], [start_date], [end_date]
         FROM [dbo].[Test]
         for SYSTEM_TIME AS OF @secondTime) all_rows

  • bonova

    @Aaron: this is a valid query, and there are a few possible variations. One can use, for example, common table expressions (CTEs) instead of sub-queries to make the code more readable.

    Here is another query that you can use if you are interested only in the last version of the updated records (a common case in DW scenarios):

    declare @firstTime datetime, @secondTime datetime
    set @firstTime = '2013-01-05 11:30:00'
    set @secondTime = '2015-06-09 16:11:38'

    ;with TestAsOfDT1 AS
    (
        SELECT * FROM Test FOR SYSTEM_TIME AS OF @firstTime
    )
    , TestAsOfDT2 AS
    (
        SELECT * FROM Test FOR SYSTEM_TIME AS OF @secondTime
    )
    SELECT CASE
               WHEN DT1.ID IS NULL AND DT2.ID IS NOT NULL THEN 'INSERTED'
               WHEN DT1.ID IS NOT NULL AND DT2.ID IS NULL THEN 'DELETED'
               WHEN CHECKSUM(DT1.Name) <> CHECKSUM(DT2.Name) THEN 'UPDATED'
               ELSE 'NO CHANGE'
           END AS CHANGE_TYPE
         , * -- Project all columns
    FROM TestAsOfDT1 AS DT1
    FULL JOIN TestAsOfDT2 AS DT2 ON DT1.ID = DT2.ID

  • Alexander Sharovarov

    Temporal tables are a great solution. As currently described, they will work great for auditing and for data warehouse projects with infrequent loads (once a day, once a week).
    But there are other data warehouse use cases which don't fit well:

    C1. Global Data warehouses (Multiple loads per day into data warehouse for different regions (time-zones) with users all over the world).
    C2. Micro-batches running throughout the day.
    C3. Continuous loads (streaming ETL).

    R1. A common requirement is that users care about a single (latest) version of the data per day.
    R2. The reporting solution should be available 24x7, so minimal downtime is required.

    Here are the issues:
    I1. Data loads and reporting happen at the same time, so it is not possible to do maintenance on the history table because it would break queries using the temporal clause. So it causes downtime.
    I2. Temporal tables create unnecessary versions, which have to be cleaned up manually. That is extra development work, and it requires downtime.
    I3. System dates do not always match business dates. In order to match them, you need to update values in the history table, and that requires downtime.

    I wonder what you can suggest to overcome these issues for global systems and systems requiring minimum downtime?

    From my point of view it would be great to have the following features:
    F1. A mode that keeps a single version per day (in that case the start and end dates should be of type date instead of datetime2), creating a new version only if the row hasn't already been changed that day.
    F2. The ability to make changes to the history table while versioning is on (for users with the required permissions):
    F2.1. Maybe introduce special commands to change the system start/end date attributes
    F2.2. Delete a version (with automatic healing of the gap in dates)
    F2.3. Introduce an AS OF clause into UPDATE so you can fix historic data w/o downtime

  • bonova

    @Alexander: thank you for the comments. We are aware that temporal at this point cannot be applied to all DW solutions, but please bear in mind that this feature is not designed only for DW. I would also say that not all the issues you mention are equally limiting.

    I agree that if the business date is not equal to the system date, then the only solution today is to update the history. If you have a requirement to support 24x7 reporting, then that option is not viable.

    It's true that temporal may generate multiple versions, but only when updates really occur on the row. Even in that case you are able to query the latest state of the data (at the end of the day, for example) by specifying the correct parameter in a FOR SYSTEM_TIME AS OF query. An AS OF query returns correct results regardless of how many updates occurred (0, 1 or many).

    To make a long story short - system-versioned temporal tables have some limitations when users need more control over the period of validity and versions but at the same time that property makes them a very good candidate for auditing and automated versioning scenarios.

    Therefore, instead of extending system-versioned tables to support those scenarios, we may choose to introduce a new type of temporal table that allows users to edit period columns and history through regularly supported update operations, without downtime.

    However, the timelines and product version for that feature are not yet determined, although we are fully aware that it would be useful to many customers.

  • asharovarov

    @bonova: Thank you for your reply. I understand that there are other use cases for temporal queries. At the same time, slowly changing dimension maintenance was targeted in the video as well ;)

    It would be great to have another version of temporal tables which would support scenarios I described. I hope it will be prioritized soon ;)

  • YoshihroKawabata

    Can I stretch the history table to Azure SQL Database with the Stretch Database feature?

  • cekberg

    This is a great new feature and I can see it being extremely useful in a lot of scenarios. 

    One question I have, though, is: does this currently work with Elastic Scale, or with sharding strategies in general?

  • bonova

    @YoshihroKawabata: Yes, you can stretch the history table to Azure SQL Database. If you query history data rarely, and query performance in that scenario is not critical, that might be a very effective configuration in terms of maintenance and total cost.

    @cekberg: Temporal is not yet available in Azure SQL Database. We will work on integration with sharding framework before it gets supported in Azure.

    One scenario that won't be supported, though, is sharding based on the period columns, because their values are exclusively determined by the destination database. However, this is a reasonable limitation, given that sharding by temporal columns shouldn't be a primary use case.

    Using temporal for a table sharded by some other column will transparently add history to all table shards, enabling time-travel and data-audit scenarios.

  • Alex Volok

    Hi Borko, all,

    Thanks for this feature - I believe the SQL Server community is very enthusiastic about it.

    There was a question about performance on big tables.
    I recently blogged about the performance of querying multi-million-row datasets:
    http://www.alexvolok.com/2015/06/sql-2016-temporal-tables-indexing-part-iii/

    Columnstore-based tables are a perfect combination with temporal tables.

    However, I ran into another issue, which I believe would be great to resolve somehow.

    A query like this will generate new versions in the history table:

    UPDATE TestUpdates
    SET ColumnInt1 = ColumnInt1

    Could temporal tables get some kind of directive or option that sets system versioning to capture only REAL changes to the actual values, so that the query I posted above would not generate a new version of every row in the current table?

    Thanks!

  • bonova

    Hi Alex, thank you for blogging about temporal - I really like the topics you have covered.

    We are aware that in some scenarios (especially in data warehousing or analytics) columnstore is a great choice for temporal. Thank you for sharing your insights on performance with different storage options.

    As for the issue you reported: yes, we are aware that some people may require that temporal tables filter out "false updates" automatically. However, this is something we likely won't be able to address in this release.

    If we were to implement this functionality, we would add an additional configuration parameter to the SYSTEM_VERSIONING clause (KEEP_FALSE_UPDATES = ON/OFF, or similar). Is this what you would expect?

    Anyway, you should know that the additional filtering would slow down DML, and the performance decrease might be significant. How much extra cost would you be willing to pay to have this capability?

  • AlexVolok

    Hi Borko, thank you for response,

    As for the issue you reported: yes, we are aware that some people may require that temporal table filter out "false updates" automatically. However, this is something we likely won't be able to address in this release. 

    If we were to implement this functionality, we would add an additional configuration parameter of SYSTEM_VERSIONING clause (KEEP_FALSE_UPDATES = ON/OFF, or similar). Is this what you would expect?

    You precisely described the proposal - to have a directive which allows keeping only actual row changes.

    Anyway, you should know that additional filtering will slow down DMLs and performance decrease might be significant. How much of extra cost you would be willing to pay to have this capability?

    Your point is absolutely clear. Perhaps system versioning currently works on top of the OUTPUT clause of DML operations, which makes the overall implementation straightforward and lightweight.

    Additional filtering would require extra work to compare the INSERTED and DELETED datasets. However, it could be done using hashes, etc.

    IMHO, it could be very valuable to end users, since some scenarios, especially blackboxed OLTP apps, can generate a significant amount of "non-real" updates.

    Thanks again for the initiative to bring temporal functionality to the upcoming release!

     

    Update 1: 
    A Connect feedback item has been created with the proposal: https://connect.microsoft.com/SQLServer/feedback/details/1489510/sql-server-2016-temporal-tables-prevent-adding-a-versions-of-non-real-row-updates

    If anyone else thinks the proposed change could be useful, please vote up that item.

     

    Update 2:

    If my assumption is correct and this feature is built on top of the OUTPUT clause, then the history table is loaded this way:

    INSERT Table1_History
    SELECT * FROM DELETED 

    Therefore, the easiest way to implement "KEEP_FALSE_UPDATES = OFF" would be a construction like:

    INSERT Table1_History
    SELECT * FROM DELETED
    EXCEPT
    SELECT * FROM INSERTED

    When it comes to the overhead, I believe in this case it is an unavoidable evil that needs to be documented; the same situation already exists with the DATA_CONSISTENCY_CHECK directive.

  • Sukhi

    Indeed, an awesome feature! However, I have a couple of questions. Please share your views:

    1. Does any audit feature need to be enabled in order to enable this temporal feature?
    2. What is the performance impact of creating temporal history on a table 100 GB in size?
    3. How does SQL Server manage its retention?
    4. How much extra space do we need to enable the temporal feature on a DB? Any best practices around it? :)

    Regards
    Sukhi

  • bonova

    @Sukhi, thank you for positive feedback and additional questions! Here are my answers:

    1. You can use temporal as a standalone feature, i.e. there is no need to enable any audit feature to start using temporal. It is sufficient to create a temporal table, or to modify an existing non-temporal table to become temporal by adding a SYSTEM_TIME period and specifying a history table.
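    A minimal sketch of both paths mentioned above (the table and column names here are illustrative assumptions, not from the show):

    ```sql
    -- Path 1: create a new system-versioned temporal table from scratch.
    CREATE TABLE dbo.Employee
    (
        EmployeeID int NOT NULL PRIMARY KEY CLUSTERED,
        Name nvarchar(100) NOT NULL,
        SysStartTime datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
        SysEndTime   datetime2 GENERATED ALWAYS AS ROW END NOT NULL,
        PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
    )
    WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory));

    -- Path 2: convert an existing, non-temporal table.
    ALTER TABLE dbo.Department
    ADD SysStartTime datetime2 GENERATED ALWAYS AS ROW START NOT NULL
            DEFAULT SYSUTCDATETIME(),
        SysEndTime datetime2 GENERATED ALWAYS AS ROW END NOT NULL
            DEFAULT CONVERT(datetime2, '9999-12-31 23:59:59.9999999'),
        PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime);

    ALTER TABLE dbo.Department
    SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.DepartmentHistory));
    ```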

    However, temporal can work together with SQL Audit in two scenarios:

    1. When you want to track who is changing the temporal configuration (SYSTEM_VERSIONING ON/OFF), you can use SQL Audit to track the ALTER TABLE event. We have extended DDL events to emit temporal-related information.
    2. When you want to correlate SQL Audit and temporal to get information on how the data looked before and after a certain user operation.

    Did you have specific auditing requirements in mind with regards to temporal?

    2. Creating the history table itself does not have any performance impact because it is initially empty.

    This is what I answered @Pasquale on a similar question:

    "Performance really depends on the workload pattern, not necessarily on table size. If the predominant DML operation in your workload is INSERT, overall DML performance will be comparable with non-temporal tables, since adding new rows introduces negligible overhead (less than 1%).

    Updates and deletes introduce some performance overhead because previous row versions are moved into the history table, but that overhead is less than if you were to implement the same logic inside the app or using triggers. You can control how large that overhead is by configuring the storage options and indexes for the history table."

    3. At this point, retention is the user's responsibility. We strongly advise using partitioning to deal with this. You can schedule your own job that will either SWITCH or DROP the oldest partition, or DELETE data from the history table (SYSTEM_VERSIONING needs to be set to OFF).

    We will definitely improve retention in upcoming feature updates and make it simpler for the end user - please stay tuned.

    4. Extra space again depends on your workload and retention requirements (how long you need to keep the history data within your database). I suggest you turn on temporal for a couple of days and get some real data points. One additional aspect is very important: our recommendation is to configure the history table with a clustered columnstore index. If that's not suitable for you, then apply page compression to your rowstore history table (SQL Server applies compression to the history table if you let it create one for you). Both options will give you huge storage savings.

    Thanks,

    Borko Novakovic (bonova@MSFT)

  • wsantosf

    @Alexander Sharovarov: I also need to handle business dates, and maybe this would work: whenever your business date is flipped forward, write a record with the system date. Then use this table to find the system date that defines the cutoff for your business date. It's similar to a date table in a data warehouse where you define fiscal periods. On a separate note, to make auditing easier, just add a 'last updated by' column to the table, so it gets loaded into the history table automatically.
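    The mapping idea above could be sketched like this (a hypothetical illustration; the table and column names are assumptions):

    ```sql
    -- Each time the business date is flipped forward, record the system time
    -- at which it happened.
    CREATE TABLE dbo.BusinessDateLog
    (
        BusinessDate date NOT NULL PRIMARY KEY,
        FlippedAtSysTime datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
    );

    -- Look up the system time to plug into FOR SYSTEM_TIME AS OF
    -- for a given business date.
    SELECT TOP (1) FlippedAtSysTime
    FROM dbo.BusinessDateLog
    WHERE BusinessDate <= '2015-07-01'
    ORDER BY BusinessDate DESC;
    ```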

  • bonova

    @wsantosf: I agree that for a subset of DW scenarios it will be possible to overcome the limitations of system-time temporal as you described. I encourage you to try this out.

    As for "making audit easier by adding 'last updated by'": in my opinion that naturally translates into a new type of column we might add - something like GENERATED ALWAYS AS USER ID / USER NAME?

    Would that syntax be aligned with your expectations?

    As for the values that would be auto-populated, the best candidates we have been considering are SUSER_SID() and SUSER_SNAME().

    Is that what you would expect populated for 'Last Updated By'?

    Thanks,

    Borko Novakovic (bonova@MSFT)


  • Rohit

    Good to see MS SQL Server getting a not-so-common feature like temporal.

    I want to know: let's say we start using SQL 2014 and want to use Temporal when we upgrade to 2016. Is there a way to define an already existing history table as part of the migration from SQL 2014 to SQL 2016? Could you please tell me what we need to consider to be able to migrate in the future?

    Regards
    Rohit

  • Drazen Sumic

    Hi Rohit,

    Thank you for your interest! We had the migration scenario (from an existing temporal implementation to in-DB implementation) in mind when we were designing this feature.

    SQL Server uses a separate table as the history table, and yes, you can provide an existing history table. There are some basic conditions that need to be met; for example, the schema of the history table in terms of number of columns, their types, and nullability needs to match the actual (current) data table. The system does consistency checks to make sure this is the case when you turn the temporal feature (system versioning) on.

    Overall, there's a good level of flexibility when it comes to managing the history table to best suit your use case. For example, on a history table you can, independently from the actual table, set up compression, indexes, and partitioning, to name a few options.

    If you're interested in more details, feel free to send an email to drasumic@microsoft.com.

    Regards,
    Drazen

  • Geert

    @bonova: Maybe I am wrong, but a 'last updated by' column would be insufficient, because you would then either miss who inserted the first version of the record or (depending on the implementation) who deleted the last version. In my opinion you need 2 columns: a system_startuser and a system_enduser, equivalent to system_startdate and system_enddate. We implemented this in our database with AFTER triggers, but this is not possible when SYSTEM_VERSIONING is ON, as you cannot update the history table. Kind regards, Geert.

  • bonova

    @Geert: Thank you for your feedback, it makes perfect sense!

    This is exactly how we were thinking about extending temporal with auditing columns, i.e. keeping the semantics that already exist for the period columns. One could add one or two columns representing the user that "opened" or "closed" the row, so all types of auditing analysis would be supported.

    However, this functionality is under consideration without committed timelines - we wanted to use this thread as an opportunity to learn more about user scenarios and critical requirements.

    Thank you again,

    Borko (MSFT)

  • Johannes

    Seems great - when will this be available in Azure SQL Data Warehouse?

  • bonova

    Hi Johannes,

    Thank you for your question. We are working now on enabling temporal in Azure SQL Database. 

    There are no final plans yet for making temporal available in SQL Data Warehouse though.

    Can you please share your main scenarios for using temporal in SQL DW? I could definitely imagine it simplifying maintenance of slowly changing dimensions. Did you have any other scenario in mind? I would love to learn if there are specific requirements for SQL DW that we should be aware of.

    Thank you,

    Borko Novakovic (MSFT) 

  • Trovaricon

    @bonova: I have a use case where multiple versioned tables have a parent-child relationship with CASCADE DELETE. If I delete a record from the parent table, it gets moved to the history table (as expected), and the child records linking to this parent are deleted from the child's current table to comply with the FK constraint & CASCADE DELETE. Unfortunately, the child table records do not appear in the child's history table (despite it being versioned) but are deleted for good (SQL 2016 Express CTP4). The solution I came up with was to use INSERT/UPDATE/DELETE triggers instead of foreign keys to enforce referential integrity (nasty & slow).

    Is this the expected behavior of SQL Server's temporal table implementation (temporal DB theory books have a different opinion), or something that was overlooked (or not yet implemented) in the current technical preview of SQL Server?

    Thank you.
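
    The trigger-based workaround described above might look like the following sketch. Hypothetical `Parent`/`Child` tables are assumed, both system-versioned, with the foreign key dropped entirely (note that AFTER triggers are allowed on temporal tables, but INSTEAD OF triggers are not):

    ```sql
    -- Emulate ON DELETE CASCADE in a trigger. Because the child rows are
    -- removed through ordinary DML, their old versions land in the child's
    -- history table, unlike the engine-level cascade on the CTP builds.
    CREATE TRIGGER trg_Parent_CascadeDelete
    ON dbo.Parent
    AFTER DELETE
    AS
    BEGIN
        SET NOCOUNT ON;
        DELETE c
        FROM dbo.Child AS c
        JOIN deleted AS d ON c.ParentID = d.ParentID;
    END;
    ```

    With the FK gone, matching INSERT/UPDATE triggers would also be needed to reject orphaned child rows, which is the "nasty & slow" part of this approach.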

  • User profile image
    Johannes

    Hi Borko, and thanks for the quick reply. Slowly changing dimensions in a structured way were the main pull for me. The second reason is less concrete at the moment, but I'll try: with ADW we first import our raw data into ADW and then transform it into well-defined dimensions etc.; this is different from how we used to work, when the data would be transformed before arriving at the warehouse. It's ELT instead of ETL, basically. One of the gains of this is centralizing the logic in the warehouse. We are a music company which deals with many different sources from many different vendors, and they are all subject to changing their schemas at their discretion. I think temporal tables would help in these situations and make developing and maintaining these transform jobs easier and more fail-proof.

  • User profile image
    bonova

    @Trovaricon: this is not expected/correct behaviour. Currently, temporal tables do not support the CASCADE DELETE scenario, and I agree that this scenario needs to be blocked until a correct implementation becomes available. I understand that alternative ways to impose integrity (triggers, app logic) are more difficult to maintain and less efficient.

  • User profile image
    Rob de Vos

    Is this feature enabled in all licenses (Enterprise / Standard)?
    Is this feature linked to the recovery model (Full / Simple)?

  • User profile image
    bonova

    @Rob de Vos: We haven't closed on the licensing model yet. The product team's recommendation is to make temporal available in Standard edition, but the decision is not yet official.

    The feature is not linked to the recovery model - it's actually orthogonal to any recovery model, as it doesn't depend on the database log.

    Thank you,

    Borko Novakovic (MSFT)

  • User profile image
    aidar

    @bonova: Do you plan to include support for a user-defined validity period of a row (application-time period tables)? It would be the killer feature for our case, where we have to set validity dates in the future.

  • User profile image
    Dipen Lama

    Does the temporal model work with ORM tools like NHibernate/Entity Framework?
    All entities will be queried via LINQ with complex associations and inheritance.

  • User profile image
    Amith211

    @bonova

    Are there any plans to allow standard attributes to be marked as hidden? It would be useful for including auditing attributes, such as ModifiedBy or ModifiedReason.

    Cheers,

    Amith

  • User profile image
    bonova

    Hi @Aidar - we are considering supporting application-time temporal tables in the future, but I do not have any timelines to share at this point.

    @Amith - I have a similar answer to your question about the HIDDEN attribute applied to standard table columns.

     

    Thank you,

    Borko Novakovic (MSFT)
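
    For context on the HIDDEN discussion above: in the current preview the HIDDEN flag applies to the period columns of a system-versioned table. A sketch with a hypothetical `dbo.Employee` table:

    ```sql
    -- The period columns are flagged HIDDEN, so SELECT * and INSERT
    -- statements without a column list keep working unchanged; the
    -- columns are returned only when named explicitly.
    CREATE TABLE dbo.Employee
    (
        EmployeeID int NOT NULL PRIMARY KEY,
        Name       nvarchar(100) NOT NULL,
        SysStart   datetime2 GENERATED ALWAYS AS ROW START HIDDEN NOT NULL,
        SysEnd     datetime2 GENERATED ALWAYS AS ROW END   HIDDEN NOT NULL,
        PERIOD FOR SYSTEM_TIME (SysStart, SysEnd)
    )
    WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory));
    ```

    What Amith asks for is extending the same HIDDEN behaviour to ordinary columns such as an audit column, which the preview does not yet allow.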

  • User profile image
    bonova

    Temporal Tables are available on Azure SQL Database!

    https://azure.microsoft.com/en-us/updates/preview-temporal-tables-in-azure-sql-database/

    Try them out today and let us know your feedback!

    Thanks,

    Borko Novakovic (MSFT)

  • User profile image
    diakonia

    The lack of support for CASCADE DELETE is preventing us from using this feature. When will CASCADE DELETE be supported in Temporal Tables?

    https://connect.microsoft.com/SQLServer/Feedback/Details/2258008

  • User profile image
    shradha

    This new concept of temporal tables is really interesting and helpful.
    However, I am looking for a specific feature which I am not able to find under temporal. It would be great if someone could help me with my doubt, which would help us decide whether to buy 2016 (as and when available).

    We have a table (Test) containing 15 columns, into which we insert data daily (approx. 25,000 rows per day). We have chosen 4 columns (A, B, C, D) plus 1 ID column, on the basis of which we decide whether to insert a record or update it.
    Suppose my table looks like this:

    ID  A   B   C   D   Day
    1   A1  B1  C1  D2  Day1
    1   A1  B1  C1  D2  Day2
    1   A2  B1  C1  D1  Day3


    Scenario 1: As you can see, the data on Day 1 and Day 2 is the same. In this case, we do not want any row to be inserted into the table.
    Scenario 2: On Day 3, the data for column A changed. In this case, we want to update this row in the table and keep the old version in the history table.

    We want to check the validity of the data on the basis of the (A, B, C, D) columns for a specific ID. If the value of any of these columns changes, I want to update the record in the temporal table and move the old one to the history table. If there is no change in these columns, I do not want to perform either operation. The purpose of doing this is to avoid redundancy: we do not want to insert the same record daily, which just increases the size of our database.

    Currently we achieve this with a stored procedure, where we match hash values of the old data against the current data and, based on the result, perform an insert or an update.
    Is there any way I can do this automatically through temporal tables?
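
    System versioning records a new history row for every UPDATE, even one that writes identical values, so the "no change, no new version" check still has to live in the loading code. A sketch of such a conditional load, under hypothetical `dbo.Test` (temporal) and `staging.TestDaily` table names; the EXCEPT idiom compares the tracked columns with correct NULL handling, much like the hash comparison described above:

    ```sql
    -- Update only rows where A, B, C or D actually changed; rows that
    -- match exactly are skipped, so no redundant history version is made.
    UPDATE t
    SET    t.A = s.A, t.B = s.B, t.C = s.C, t.D = s.D
    FROM   dbo.Test AS t
    JOIN   staging.TestDaily AS s ON s.ID = t.ID
    WHERE  EXISTS (SELECT t.A, t.B, t.C, t.D
                   EXCEPT
                   SELECT s.A, s.B, s.C, s.D);

    -- Insert IDs not seen before.
    INSERT INTO dbo.Test (ID, A, B, C, D)
    SELECT s.ID, s.A, s.B, s.C, s.D
    FROM   staging.TestDaily AS s
    WHERE  NOT EXISTS (SELECT 1 FROM dbo.Test AS t WHERE t.ID = s.ID);
    ```

    With this in place, the temporal table automatically keeps the old version of each genuinely changed row, replacing the hand-rolled history-table logic but not the change detection itself.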

  • User profile image
    Caroline21

    @diakonia
    @shradha


    Check out the following approach to handling data change over time. It seems that all the challenges of handling relational issues in time have been addressed and can be neatly left under the covers. The developers just work in the current view; the magic under the covers handles the temporal issues...
    https://www.youtube.com/watch?v=V1EcsuJxUno

  • User profile image
    Chloe

    Hello,

    I have a question about temporal tables. Is it possible to choose which columns are included in the historization, or is it necessary to record the changes for every column?

    Thank you

  • User profile image
    BolaSaheed

    This article has been very helpful, for example in the area of auditing, as well as for avoiding the overhead of using triggers to implement the same scenario.

     

    Thank you

  • User profile image
    Chris

    @bonova Is there any update on a 'Last updated by' type of column?

Add Your 2 Cents