In R, how do you loop over the rows of a data frame really fast?


Suppose that you have a data frame with many rows and many columns.

The columns have names. You want to access rows by number, and columns by name.

For example, one (possibly slow) way to loop over the rows is

    for (i in 1:nrow(df)) {
        print(df[i, "column1"])
        # do more things with the data frame...
    }

Another way is to create "lists" for the separate columns (like column1_list = df[["column1"]]) and access those lists in one loop. This approach might be fast, but it is also inconvenient if you want to access many columns.
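As a sketch of that column-extraction approach (the data frame and column names here are made up for illustration):

```r
# Hypothetical data frame
df <- data.frame(column1 = 1:5, column2 = letters[1:5])

# Extract the columns once, outside the loop
column1 <- df[["column1"]]
column2 <- df[["column2"]]

out <- character(nrow(df))
for (i in seq_len(nrow(df))) {
  # Plain vector indexing avoids the overhead of df[i, "column1"]
  out[i] <- paste(column1[i], column2[i])
}
out
```

The speedup comes from indexing plain atomic vectors inside the loop instead of repeatedly subsetting the data frame.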

Is there a fast way of looping over the rows of a data frame? Is some other data structure better for looping fast?


I think I need to make this a full answer because I find comments harder to track and I already lost one comment on this... There is an example by nullglob that demonstrates the differences among for and the apply-family functions much better than the other examples. When the function being applied is very slow, that's where all the time is spent and you won't see differences among the looping variants. But when the function is trivial, you can see how much the looping itself influences things.

I'd also like to add that some members of the apply family left unexplored in other examples have interesting performance properties. First, I'll show replications of nullglob's relative results on my machine.

    n <- 1e6
    # (assumes sinI has been pre-allocated, e.g. sinI <- numeric(n))
    system.time(for(i in 1:n) sinI[i] <- sin(i))
       user  system elapsed
      5.721   0.028   5.712

    # lapply runs much faster for the same result
    system.time(sinI <- lapply(1:n, sin))
       user  system elapsed
      1.353   0.012   1.361

He also found sapply much slower. Here are some others that weren't tested.

Plain old apply to a matrix version of the data...

    mat <- matrix(1:n, ncol = 1)
    system.time(sinI <- apply(mat, 1, sin))
       user  system elapsed
      8.478   0.116   8.531

So, the apply() command itself is substantially slower than the for loop. (The for loop is not slowed down appreciably if I use sin(mat[i, 1]).)

Another one that doesn't seem to be tested in other posts is tapply.

    system.time(sinI <- tapply(1:n, 1:n, sin))
       user  system elapsed
     12.908   0.266  13.589

Of course, one would never use tapply this way, and its utility is far beyond any such speed problem in most cases.
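For context, tapply's intended use is applying a function within groups, not element-by-element looping. A small illustration with made-up data:

```r
# Apply a function to subsets of a vector, split by a grouping factor
scores <- c(10, 20, 30, 40, 50, 60)
group  <- c("a", "a", "b", "b", "c", "c")

means <- tapply(scores, group, mean)
means
#  a  b  c
# 15 35 55
```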


The fastest way is to not loop at all (i.e. use vectorized operations). One of the few instances in which you genuinely need to loop is when there are dependencies (i.e. one iteration depends on another). Otherwise, try to do as much vectorized computation outside the loop as possible.
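To illustrate the vectorized alternative (using the same sin example as above, on a smaller n so it runs quickly):

```r
n <- 1e5
x <- 1:n

# Looped version: one sin() call per element
looped <- numeric(n)
for (i in seq_len(n)) looped[i] <- sin(x[i])

# Vectorized version: a single call over the whole vector,
# with the per-element work done in C
vectorized <- sin(x)

all.equal(looped, vectorized)  # TRUE
```

Both produce identical results; the vectorized call avoids R-level interpreter overhead on every iteration.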

If you do need to loop, then using a for loop is essentially as fast as anything else (lapply can be a little faster, but other apply functions tend to be around the same speed as for).


Exploiting the fact that data.frames are essentially lists of column vectors, one can use do.call to apply a function with the arity of the number of columns over each column of the data.frame (similar to a "zipping" over a list in other languages).

    do.call(paste, data.frame(x = c(1, 2), y = c("a", "b"), z = c(5, 6)))
    # "1 a 5" "2 b 6"
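The same zipping idea works for an arbitrary row-wise function via mapply; this is a hypothetical extension of the example above, not part of the original answer:

```r
df <- data.frame(x = c(1, 2), y = c(10, 20))

# do.call spreads the columns of df as named arguments to mapply,
# which calls the function once per row
sums <- do.call(mapply, c(list(FUN = function(x, y) x + y), df))
sums  # 11 22
```

This keeps the "loop over rows" structure while only ever indexing plain column vectors.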
