
Introductory time-series forecasting with torch


This is the first post in a series introducing time-series forecasting with torch. It does assume some prior experience with torch and/or deep learning. But as far as time series are concerned, it starts right from the beginning, using recurrent neural networks (GRU or LSTM) to predict how something develops in time.

In this post, we build a network that uses a sequence of observations to predict a value for the very next point in time. What if we'd like to forecast a sequence of values, such as, say, a week or a month of measurements?

One thing we could do is feed back into the system the previously forecasted value; this is something we'll try at the end of this post. Subsequent posts will explore other options, some of them involving significantly more complex architectures. It will be interesting to compare their performances; but the essential goal is to introduce some torch "recipes" that you can apply to your own data.

We start by inspecting the dataset used. It is a low-dimensional, but pretty versatile and complex one.

The vic_elec dataset, available through package tsibbledata, provides three years of half-hourly electricity demand for Victoria, Australia, augmented by same-resolution temperature information and a daily holiday indicator.
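If you want to follow along, a minimal setup might look like the following; it assumes the packages used throughout this post are installed, and the final glimpse() call produces the overview shown below.

library(torch)
library(tidyverse)
library(lubridate)
library(tsibble)
library(tsibbledata)
library(feasts)

glimpse(vic_elec)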

Rows: 52,608
Columns: 5
$ Time        <dttm> 2012-01-01 00:00:00, 2012-01-01 00:30:00, 2012-01-01 01:00:00,…
$ Demand      <dbl> 4382.825, 4263.366, 4048.966, 3877.563, 4036.230, 3865.597, 369…
$ Temperature <dbl> 21.40, 21.05, 20.70, 20.55, 20.40, 20.25, 20.10, 19.60, 19.10, …
$ Date        <date> 2012-01-01, 2012-01-01, 2012-01-01, 2012-01-01, 2012-01-01, 20…
$ Holiday     <lgl> TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRU…

Depending on what subset of variables is used, and whether and how the data is temporally aggregated, these data may serve to illustrate a variety of different techniques. For example, in the third edition of Forecasting: Principles and Practice, daily averages are used to teach quadratic regression with ARMA errors. In this first introductory post, though, as well as in most of its successors, we'll attempt to forecast Demand without relying on additional information, and we keep the original resolution.

To get an impression of how electricity demand varies over different timescales, let's inspect data for two months that nicely illustrate the U-shaped relationship between temperature and demand: January, 2014 and July, 2014.

First, right here is July.

vic_elec_2014 <-  vic_elec %>%
  filter(year(Date) == 2014) %>%
  select(-c(Date, Holiday)) %>%
  mutate(Demand = scale(Demand), Temperature = scale(Temperature)) %>%
  pivot_longer(-Time, names_to = "variable") %>%
  update_tsibble(key = variable)

vic_elec_2014 %>% filter(month(Time) == 7) %>% 
  autoplot() + 
  scale_colour_manual(values = c("#08c5d1", "#00353f")) +
  theme_minimal()

Figure 1: Temperature and electricity demand (normalized). Victoria, Australia, 07/2014.

It's winter; temperature fluctuates below average, while electricity demand is above average (heating). There is strong variation over the course of the day; we see troughs in the demand curve corresponding to ridges in the temperature graph, and vice versa. While diurnal variation dominates, there also is variation over the days of the week. Between weeks, though, we don't see much difference.

Compare this with the data for January:

vic_elec_2014 %>% filter(month(Time) == 1) %>% 
  autoplot() + 
  scale_colour_manual(values = c("#08c5d1", "#00353f")) +
  theme_minimal()

Figure 2: Temperature and electricity demand (normalized). Victoria, Australia, 01/2014.

We still see the strong circadian variation. We still see some day-of-week variation. But now it is high temperatures that cause elevated demand (cooling). Also, there are two periods of unusually high temperatures, accompanied by exceptional demand. We anticipate that in a univariate forecast, not taking into account temperature, this will be hard – or even impossible – to forecast.

Let's see a concise portrait of how Demand behaves using feasts::STL(). First, here is the decomposition for July:

vic_elec_2014 <-  vic_elec %>%
  filter(year(Date) == 2014) %>%
  select(-c(Date, Holiday))

cmp <- vic_elec_2014 %>% filter(month(Time) == 7) %>%
  model(STL(Demand)) %>% 
  components()

cmp %>% autoplot()

Figure 3: STL decomposition of electricity demand. Victoria, Australia, 07/2014.

And here, for January:
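Presumably, the code is the same as before, just with the month filter changed:

cmp <- vic_elec_2014 %>% filter(month(Time) == 1) %>%
  model(STL(Demand)) %>% 
  components()

cmp %>% autoplot()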


Figure 4: STL decomposition of electricity demand. Victoria, Australia, 01/2014.

Both nicely illustrate the strong circadian and weekly seasonalities (with diurnal variation substantially stronger in January). If we look closely, we can even see how the trend component is more influential in January than in July. This again hints at much greater difficulty forecasting the January than the July developments.

Now that we have an idea what awaits us, let's begin by creating a torch dataset.

Here is what we intend to do. We want to start our journey into forecasting by using a sequence of observations to predict their immediate successor. In other words, the input (x) for each batch item is a vector, while the target (y) is a single value. The length of the input sequence, x, is parameterized as n_timesteps, the number of consecutive observations to extrapolate from.

The dataset will reflect this in its .getitem() method. When asked for the observations at index i, it will return tensors like so:

list(
      x = self$x[start:end],
      y = self$x[end+1]
)

where start:end is a vector of indices, of length n_timesteps, and end + 1 is a single index.

Now, if the dataset just iterated over its input in order, advancing the index one at a time, those lines could simply read

list(
      x = self$x[i:(i + self$n_timesteps - 1)],
      y = self$x[self$n_timesteps + i]
)
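To make the indexing concrete, here is the same windowing logic on a plain R vector (a toy illustration, not part of the dataset code):

x <- c(10, 20, 30, 40, 50, 60)   # toy series
n_timesteps <- 3
i <- 2

x[i:(i + n_timesteps - 1)]       # input window: 20 30 40
x[n_timesteps + i]               # target: 50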

Since many sequences in the data are similar, we can reduce training time by making use of only a fraction of the data in every epoch. This can be accomplished by (optionally) passing a sample_frac smaller than 1. In initialize(), a random set of start indices is prepared; .getitem() then just does what it normally does: look up the (x, y) pair at a given index.
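As a tiny illustration of how those start indices are drawn (base R only; the numbers are made up):

n <- 10            # number of admissible start positions
sample_frac <- 0.5

# a random, sorted subset of start indices; actual values depend on the RNG state
sort(sample.int(n = n, size = n * sample_frac))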

Here is the complete dataset code:

elec_dataset <- dataset(
  name = "elec_dataset",
  
  initialize = function(x, n_timesteps, sample_frac = 1) {

    self$n_timesteps <- n_timesteps
    self$x <- torch_tensor((x - train_mean) / train_sd)
    
    n <- length(self$x) - self$n_timesteps 
    
    self$starts <- sort(sample.int(
      n = n,
      size = n * sample_frac
    ))

  },
  
  .getitem = function(i) {
    
    start <- self$starts[i]
    end <- start + self$n_timesteps - 1
    
    list(
      x = self$x[start:end],
      y = self$x[end + 1]
    )

  },
  
  .length = function() {
    length(self$starts) 
  }
)

You may have noticed that we normalize the data by globally defined train_mean and train_sd. We still have to calculate those.

The way we split the data is straightforward. We use the whole of 2012 for training, and all of 2013 for validation. For testing, we take the "difficult" month of January, 2014. You are invited to compare testing results for July of that same year, and compare performances.

vic_elec_get_year <- function(year, month = NULL) {
  vic_elec %>%
    filter(year(Date) == year, month(Date) == if (is.null(month)) month(Date) else month) %>%
    as_tibble() %>%
    select(Demand)
}

elec_train <- vic_elec_get_year(2012) %>% as.matrix()
elec_valid <- vic_elec_get_year(2013) %>% as.matrix()
elec_test <- vic_elec_get_year(2014, 1) %>% as.matrix() # or 2014, 7, alternatively

train_mean <- mean(elec_train)
train_sd <- sd(elec_train)

Now, to instantiate a dataset, we still need to pick a sequence length. From prior inspection, a week seems like a sensible choice.

n_timesteps <- 7 * 24 * 2 # days * hours * half-hours

Now we can go ahead and create a dataset for the training data. Let's say we'll make use of 50% of the data in every epoch:

train_ds <- elec_dataset(elec_train, n_timesteps, sample_frac = 0.5)
length(train_ds)
 8615

Quick check: are the shapes correct? Requesting a single element, for example via train_ds[1], we get:

$x
torch_tensor
-0.4141
-0.5541
[...]       ### lines removed by me
 0.8204
 0.9399
... [the output was truncated (use n=-1 to disable)]
[ CPUFloatType{336,1} ]

$y
torch_tensor
-0.6771
[ CPUFloatType{1} ]

Yes: this is what we wanted to see. The input sequence has n_timesteps values in the first dimension, and a single one in the second, corresponding to the only feature present, Demand. As intended, the prediction tensor holds a single value, corresponding – as we know – to time step n_timesteps + 1.

That takes care of a single input-output pair. As usual, batching is arranged for by torch's dataloader class. We instantiate one for the training data, and immediately verify the outcome once again:

batch_size <- 32
train_dl <- train_ds %>% dataloader(batch_size = batch_size, shuffle = TRUE)
length(train_dl)

b <- train_dl %>% dataloader_make_iter() %>% dataloader_next()
b
$x
torch_tensor
(1,.,.) = 
  0.4805
  0.3125
[...]       ### lines removed by me
 -1.1756
 -0.9981
... [the output was truncated (use n=-1 to disable)]
[ CPUFloatType{32,336,1} ]

$y
torch_tensor
 0.1890
 0.5405
[...]       ### lines removed by me
 2.4015
 0.7891
... [the output was truncated (use n=-1 to disable)]
[ CPUFloatType{32,1} ]

We see the added batch dimension in front, resulting in overall shape (batch_size, n_timesteps, num_features). This is the format expected by the model, or more precisely, by its initial RNN layer.
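If you prefer to check the shapes programmatically rather than read them off the printed tensors, dim() on a torch tensor returns its shape; for the batch above this should give:

dim(b$x)   # 32 336 1  -> (batch_size, n_timesteps, num_features)
dim(b$y)   # 32 1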

Before we go on, let's quickly create datasets and dataloaders for validation and test data, as well.

valid_ds <- elec_dataset(elec_valid, n_timesteps, sample_frac = 0.5)
valid_dl <- valid_ds %>% dataloader(batch_size = batch_size)

test_ds <- elec_dataset(elec_test, n_timesteps)
test_dl <- test_ds %>% dataloader(batch_size = 1)

The model consists of an RNN – of type GRU or LSTM, per the user's choice – and an output layer. The RNN does most of the work; the single-neuron linear layer that outputs the prediction compresses its vector input to a single value.

Here, first, is the model definition.

model <- nn_module(
  
  initialize = function(type, input_size, hidden_size, num_layers = 1, dropout = 0) {
    
    self$type <- type
    self$num_layers <- num_layers
    
    self$rnn <- if (self$type == "gru") {
      nn_gru(
        input_size = input_size,
        hidden_size = hidden_size,
        num_layers = num_layers,
        dropout = dropout,
        batch_first = TRUE
      )
    } else {
      nn_lstm(
        input_size = input_size,
        hidden_size = hidden_size,
        num_layers = num_layers,
        dropout = dropout,
        batch_first = TRUE
      )
    }
    
    self$output <- nn_linear(hidden_size, 1)
    
  },
  
  forward = function(x) {
    
    # list of [output, hidden]
    # we use the output, which is of size (batch_size, n_timesteps, hidden_size)
    x <- self$rnn(x)[[1]]
    
    # from the output, we only want the final timestep
    # shape now is (batch_size, hidden_size)
    x <- x[ , dim(x)[2], ]
    
    # feed this to a single output neuron
    # final shape then is (batch_size, 1)
    x %>% self$output() 
  }
  
)

Most importantly, here is what happens in forward().

  1. The RNN returns a list. The list holds two tensors, an output, and a synopsis of hidden states. We discard the state tensor, and keep the output only. The distinction between state and output, or rather, the way it is reflected in what a torch RNN returns, deserves to be inspected more closely. We'll do that in a moment.

  2. Of the output tensor, we are interested in only the final time step, though.

  3. Only this one, thus, is passed to the output layer.

  4. Finally, that output layer's output is returned.

Now, a bit more on states vs. outputs. Consider the classic diagram of an unrolled RNN, as found in Goodfellow, Bengio, and Courville (2016).

Let's pretend there are three time steps only, corresponding to \(t-1\), \(t\), and \(t+1\). The input sequence, accordingly, consists of \(x_{t-1}\), \(x_{t}\), and \(x_{t+1}\).

At each \(t\), a hidden state is generated, and so is an output. Normally, if our goal is to forecast \(y_{t+2}\), that is, the very next observation, we want to take into account the complete input sequence. Put differently, we want to have run through the complete mechanism of state updates. The logical thing to do would thus be to choose \(o_{t+1}\), either for direct return from forward() or for further processing.

Indeed, returning \(o_{t+1}\) is what a Keras LSTM or GRU would do by default. Not so its torch counterparts. In torch, the output tensor comprises all of the \(o_t\). This is why, in step two above, we pick the single time step we are interested in – namely, the last one.

In later posts, we'll make use of more than the last time step. Sometimes, we'll use the sequence of hidden states (the \(h\)s) instead of the outputs (the \(o\)s). So you may feel like asking: what if we used \(h_{t+1}\) here instead of \(o_{t+1}\)? The answer is: with a GRU, this would not make a difference, as those two are identical. With an LSTM, though, it would, as the LSTM keeps a second state, namely, the "cell" state.
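To make the output-versus-state distinction concrete, here is a small standalone check; this is a sketch that assumes batch_first = TRUE, a single layer, and a single direction:

rnn <- nn_gru(input_size = 1, hidden_size = 32, batch_first = TRUE)
x <- torch_randn(8, 336, 1)   # (batch_size, n_timesteps, input_size)
ret <- rnn(x)

dim(ret[[1]])                 # 8 336 32 : outputs o_t, one per time step
dim(ret[[2]])                 # 1 8 32   : final hidden state, one slice per layer

# for a single-layer GRU, the last output equals the final hidden state
torch_allclose(ret[[1]][ , dim(ret[[1]])[2], ], ret[[2]][1, , ])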

On to initialize(). For ease of experimentation, we instantiate either a GRU or an LSTM based on user input. Two things are worth noting:

  • We pass batch_first = TRUE when creating the RNNs. This is required with torch RNNs when we want to consistently have batch items stacked in the first dimension. And we do want that; it is arguably less confusing than a change of dimension semantics for one sub-type of module.

  • num_layers can be used to build a stacked RNN, analogous to what you'd get in Keras when chaining two GRUs/LSTMs (the first one created with return_sequences = TRUE). This parameter, too, we've included for quick experimentation; see the example just below.
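For instance, a stacked, two-layer LSTM with dropout between layers could be requested like so (the hidden size of 32 is an arbitrary choice for illustration):

# two stacked LSTM layers, with dropout applied between them
net_stacked <- model("lstm", input_size = 1, hidden_size = 32, num_layers = 2, dropout = 0.2)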

Let's instantiate a model for training. It will be a single-layer GRU with thirty-two units.

# training RNNs on the GPU currently prints a warning that may clutter 
# the console
# see https://github.com/mlverse/torch/issues/461
# alternatively, use 
# device <- "cpu"
device <- torch_device(if (cuda_is_available()) "cuda" else "cpu")

net <- model("gru", 1, 32)
net <- net$to(device = device)

After all these RNN specifics, the training process is completely standard.

optimizer <- optim_adam(net$parameters, lr = 0.001)

num_epochs <- 30

train_batch <- function(b) {
  
  optimizer$zero_grad()
  output <- net(b$x$to(device = device))
  target <- b$y$to(device = device)
  
  loss <- nnf_mse_loss(output, target)
  loss$backward()
  optimizer$step()
  
  loss$item()
}

valid_batch <- function(b) {
  
  output <- net(b$x$to(device = device))
  target <- b$y$to(device = device)
  
  loss <- nnf_mse_loss(output, target)
  loss$item()
  
}

for (epoch in 1:num_epochs) {
  
  net$train()
  train_loss <- c()
  
  coro::loop(for (b in train_dl) {
    loss <- train_batch(b)
    train_loss <- c(train_loss, loss)
  })
  
  cat(sprintf("\nEpoch %d, training: loss: %3.5f \n", epoch, mean(train_loss)))
  
  net$eval()
  valid_loss <- c()
  
  coro::loop(for (b in valid_dl) {
    loss <- valid_batch(b)
    valid_loss <- c(valid_loss, loss)
  })
  
  cat(sprintf("\nEpoch %d, validation: loss: %3.5f \n", epoch, mean(valid_loss)))
}
Epoch 1, training: loss: 0.21908 

Epoch 1, validation: loss: 0.05125 

Epoch 2, training: loss: 0.03245 

Epoch 2, validation: loss: 0.03391 

Epoch 3, training: loss: 0.02346 

Epoch 3, validation: loss: 0.02321 

Epoch 4, training: loss: 0.01823 

Epoch 4, validation: loss: 0.01838 

Epoch 5, training: loss: 0.01522 

Epoch 5, validation: loss: 0.01560 

Epoch 6, training: loss: 0.01315 

Epoch 6, validation: loss: 0.01374 

Epoch 7, training: loss: 0.01205 

Epoch 7, validation: loss: 0.01200 

Epoch 8, training: loss: 0.01155 

Epoch 8, validation: loss: 0.01157 

Epoch 9, training: loss: 0.01118 

Epoch 9, validation: loss: 0.01096 

Epoch 10, training: loss: 0.01070 

Epoch 10, validation: loss: 0.01132 

Epoch 11, training: loss: 0.01003 

Epoch 11, validation: loss: 0.01150 

Epoch 12, training: loss: 0.00943 

Epoch 12, validation: loss: 0.01106 

Epoch 13, training: loss: 0.00922 

Epoch 13, validation: loss: 0.01069 

Epoch 14, training: loss: 0.00862 

Epoch 14, validation: loss: 0.01125 

Epoch 15, training: loss: 0.00842 

Epoch 15, validation: loss: 0.01095 

Epoch 16, training: loss: 0.00820 

Epoch 16, validation: loss: 0.00975 

Epoch 17, training: loss: 0.00802 

Epoch 17, validation: loss: 0.01120 

Epoch 18, training: loss: 0.00781 

Epoch 18, validation: loss: 0.00990 

Epoch 19, training: loss: 0.00757 

Epoch 19, validation: loss: 0.01017 

Epoch 20, training: loss: 0.00735 

Epoch 20, validation: loss: 0.00932 

Epoch 21, training: loss: 0.00723 

Epoch 21, validation: loss: 0.00901 

Epoch 22, training: loss: 0.00708 

Epoch 22, validation: loss: 0.00890 

Epoch 23, training: loss: 0.00676 

Epoch 23, validation: loss: 0.00914 

Epoch 24, training: loss: 0.00666 

Epoch 24, validation: loss: 0.00922 

Epoch 25, training: loss: 0.00644 

Epoch 25, validation: loss: 0.00869 

Epoch 26, training: loss: 0.00620 

Epoch 26, validation: loss: 0.00902 

Epoch 27, training: loss: 0.00588 

Epoch 27, validation: loss: 0.00896 

Epoch 28, training: loss: 0.00563 

Epoch 28, validation: loss: 0.00886 

Epoch 29, training: loss: 0.00547 

Epoch 29, validation: loss: 0.00895 

Epoch 30, training: loss: 0.00523 

Epoch 30, validation: loss: 0.00935 

Loss decreases quickly, and we don't seem to be overfitting on the validation set.

Numbers are pretty abstract, though. So, we'll use the test set to see how the forecast actually looks.

Here is the forecast for January, 2014, thirty minutes at a time.

net$eval()

preds <- rep(NA, n_timesteps)

coro::loop(for (b in test_dl) {
  output <- net(b$x$to(device = device))
  preds <- c(preds, output %>% as.numeric())
})

vic_elec_jan_2014 <-  vic_elec %>%
  filter(year(Date) == 2014, month(Date) == 1) %>%
  select(Demand)

preds_ts <- vic_elec_jan_2014 %>%
  add_column(forecast = preds * train_sd + train_mean) %>%
  pivot_longer(-Time) %>%
  update_tsibble(key = name)

preds_ts %>%
  autoplot() +
  scale_colour_manual(values = c("#08c5d1", "#00353f")) +
  theme_minimal()

Figure 6: One-step-ahead predictions for January, 2014.

Overall, the forecast is excellent, but it is interesting to see how it "regularizes" the most extreme peaks. This kind of "regression to the mean" will be seen much more strongly in later setups, when we try to forecast further into the future.

Can we use our current architecture for multi-step prediction? We can.

One thing we can do is feed back the current prediction, that is, append it to the input sequence as soon as it is available. Effectively then, for each batch item, we obtain a sequence of predictions in a loop.

We'll try to forecast 336 time steps, that is, an entire week.

n_forecast <- 2 * 24 * 7

test_preds <- vector(mode = "list", length = length(test_dl))

i <- 1

coro::loop(for (b in test_dl) {
  
  input <- b$x
  output <- net(input$to(device = device))
  preds <- as.numeric(output)
  
  for (j in 2:n_forecast) {
    input <- torch_cat(list(input[ , 2:length(input), ], output$view(c(1, 1, 1))), dim = 2)
    output <- net(input$to(device = device))
    preds <- c(preds, as.numeric(output))
  }
  
  test_preds[[i]] <- preds
  i <<- i + 1
  
})

For visualization, let's pick three non-overlapping sequences.

test_pred1 <- test_preds[[1]]
test_pred1 <- c(rep(NA, n_timesteps), test_pred1, rep(NA, nrow(vic_elec_jan_2014) - n_timesteps - n_forecast))

test_pred2 <- test_preds[[408]]
test_pred2 <- c(rep(NA, n_timesteps + 407), test_pred2, rep(NA, nrow(vic_elec_jan_2014) - 407 - n_timesteps - n_forecast))

test_pred3 <- test_preds[[817]]
test_pred3 <- c(rep(NA, nrow(vic_elec_jan_2014) - n_forecast), test_pred3)


preds_ts <- vic_elec %>%
  filter(year(Date) == 2014, month(Date) == 1) %>%
  select(Demand) %>%
  add_column(
    iterative_ex_1 = test_pred1 * train_sd + train_mean,
    iterative_ex_2 = test_pred2 * train_sd + train_mean,
    iterative_ex_3 = test_pred3 * train_sd + train_mean) %>%
  pivot_longer(-Time) %>%
  update_tsibble(key = name)

preds_ts %>%
  autoplot() +
  scale_colour_manual(values = c("#08c5d1", "#00353f", "#ffbf66", "#d46f4d")) +
  theme_minimal()

Figure 7: Multi-step predictions for January, 2014, obtained in a loop.

Even with this very basic forecasting technique, the diurnal rhythm is preserved, albeit in a strongly smoothed form. There even is an apparent day-of-week periodicity in the forecast. We do see, however, very strong regression to the mean, even in loop instances where the network was "primed" with a higher input sequence.

Hopefully this post provided a useful introduction to time series forecasting with torch. Evidently, we picked a challenging time series – challenging, that is, for at least two reasons:

  • To correctly factor in the trend, external information is needed: external information in the form of a temperature forecast, which, "in reality", would be easily obtainable.

  • In addition to the highly important trend component, the data are characterized by multiple levels of seasonality.

Of these, the latter is less of a problem for the techniques we're working with here. If we found that some level of seasonality went undetected, we could try to adapt the current setup in a number of uncomplicated ways:

  • Use an LSTM instead of a GRU. In theory, an LSTM should be better able to capture additional lower-frequency components thanks to its secondary storage, the cell state.

  • Stack several layers of GRU/LSTM. In theory, this should allow for learning a hierarchy of temporal features, analogous to what we see in a convolutional neural network.

To address the former obstacle, larger changes to the architecture would be needed. We may attempt to do that in a later, "bonus", post. But in the upcoming installments, we'll first dive into often-used techniques for sequence prediction, also porting to numerical time series things that are commonly done in natural language processing.

Thanks for reading!

Photo by Nick Dunn on Unsplash

Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning. MIT Press.
