If you are building your application using the golem (Fay et al. 2021) framework, you are building your application as a package. R packages provide a way to include internal datasets, which can then be used as objects inside your app. This is the solution you should go for if your data are never or rarely updated: the datasets are created during package development, then included inside the build of your package. The upside of this approach is that it makes the data fast to read, as they are serialized as native R objects.
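As a minimal sketch of that package-development workflow (the script name and `mydata.csv` source file are hypothetical, not from the golem docs):

```r
## data-raw/mydata.R -- run once during package development,
## not at app runtime. `mydata.csv` is a hypothetical raw source.
mydata <- read.csv("data-raw/mydata.csv")

## Serialize the object into data/mydata.rda so it ships with the
## package build; the app then accesses it simply as `mydata`.
usethis::use_data(mydata, overwrite = TRUE)
```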
What about datasets that are frequently updated? For example, a large dataset that I want to refresh on a daily basis and read locally from disk for fast read times. Is there a recommended workflow for golem applications that require on-disk data that is updated often? Just ./data-raw plus ./data, and then ignore the large files when building the package?
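One way to sketch the workflow the question proposes, assuming the refreshed file lives in a directory outside the package build (the `APP_DATA_DIR` variable, default path, and `mydata.rds` filename are all assumptions for illustration):

```r
## During development: keep large raw files out of the built package.
usethis::use_build_ignore("data-raw")

## At app startup: read the latest on-disk snapshot. An .rds file
## keeps the fast native-serialization read the book describes.
app_data_path <- Sys.getenv("APP_DATA_DIR", unset = "/srv/app-data")
mydata <- readRDS(file.path(app_data_path, "mydata.rds"))
```

A daily refresh job would then overwrite `mydata.rds` in that directory without requiring the package to be rebuilt.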
From: https://engineering-shiny.org/common-app-caveats.html?q=data#including-data-in-your-application