Why data structures – and algorithms – if we can relegate everything to disk I/O? If performance isn’t a concern and hard drives are cheap, as with automation, we could get away with it. The operating system would schedule our tasks, allocate the memory, and clean up afterward. Years and years of development time have gone into making processes Just Work; why mess with a good thing? (Embrace the command line; embrace the shell.)
We go back to our original assumptions: performance matters, and storage isn’t cheap. Supposing this, why would we want to “wear RAM out” by keeping programs long-lived, sipping on current and squatting in memory? That more expensive memory doesn’t have moving parts, for one. For another, a crashing program might necessitate a reboot, but a crashed disk means mechanical replacement, lost data, and so on.
Our professors could have said, “You are learning about linked lists, stacks, trees and queues because you will write artifacts to eternity. Such will be your authorship that these processes will outlast workdays and weekends. Scheduled tasks are as the brief flame of a candle; you will write slow-burning stars, fusion suns.”
That leans more toward systems programming. If any process must be allowed to fail, let it be one in user space; the kernel and the daemons need to last as long as power drives the fans, please.
What kinds of user-based programs would benefit from long-running processes? Anything having to do with real-time, although those would seem to stick very close to the system, bare-metal layer. The criterion would be a program that exercises dynamic memory allocation, creating and deleting objects as it goes. (That still sounds like server-side to me.)
Spreadsheets are one example of a user-side, long-running process. You could have Excel open all day. A few people could be attacking a shared workbook at once. We really take copy-paste for granted. None of us bat an eye at transferring the goods: if it looks like a table, we trust Excel to understand it. Rows are inserted, cells are deleted, macros are executed.
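Those row insertions and deletions are exactly where data-structure choice earns its keep in a long-lived process. A minimal sketch, assuming a toy model (the `Row` and `Sheet` classes are hypothetical, not how Excel works): keeping rows in a doubly linked list lets an insert or delete splice a few pointers instead of shifting every later row, the way a contiguous array would.

```python
class Row:
    """One spreadsheet row in a doubly linked list (hypothetical model)."""
    def __init__(self, cells):
        self.cells = cells
        self.prev = None
        self.next = None

class Sheet:
    """Rows held in a doubly linked list: inserts and deletes are
    pointer splices, so a day's worth of edits never shifts the
    rest of the sheet around in memory."""
    def __init__(self):
        self.head = None

    def append(self, cells):
        node = Row(cells)
        if self.head is None:
            self.head = node
        else:
            cur = self.head
            while cur.next:
                cur = cur.next
            cur.next, node.prev = node, cur
        return node

    def insert_after(self, node, cells):
        # Splice a new row in after `node` -- no shifting of later rows.
        new = Row(cells)
        new.prev, new.next = node, node.next
        if node.next:
            node.next.prev = new
        node.next = new
        return new

    def delete(self, node):
        # Unlink the row; its neighbors are stitched back together.
        if node.prev:
            node.prev.next = node.next
        else:
            self.head = node.next
        if node.next:
            node.next.prev = node.prev

    def rows(self):
        cur, out = self.head, []
        while cur:
            out.append(cur.cells)
            cur = cur.next
        return out
```

The trade-off, of course, is that reading row N now means walking the list; a real spreadsheet would layer an index on top. The point is only that a process expected to run all day cares about which operations churn memory.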
Games are another example. You don’t want memory leaks forcing inopportune reboots. Gamers are likely to keep the thing running while their GPUs whine, but that’s the whole point of having enthusiast hardware. Even if most communication is client-server, the game needs to be receptive to controller inputs. It needs to manage a relentless game loop and maintain responsiveness.
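That relentless loop can be sketched. Below is a minimal fixed-timestep loop, assuming simulated frame times and a stand-in input queue (both hypothetical here in place of a real clock and controller): input is drained without blocking, and an accumulator keeps the simulation stepping at a constant rate even when rendering stutters.

```python
from collections import deque

DT = 1 / 60  # fixed simulation step, in seconds

def run_loop(frame_times, inputs):
    """Sketch of a fixed-timestep game loop.

    `frame_times` simulates how long each rendered frame took;
    `inputs` stands in for polling a controller. Neither call
    ever blocks, so the loop stays responsive no matter what.
    """
    pending = deque(inputs)
    state = {"x": 0.0}
    accumulator = 0.0
    updates = 0
    for elapsed in frame_times:
        accumulator += elapsed
        while pending:                # drain input without blocking
            if pending.popleft() == "right":
                state["x"] += 1.0
        while accumulator >= DT:      # catch up in fixed steps
            # physics / game-state update would go here
            updates += 1
            accumulator -= DT
        # render(state) would go here
    return updates, state["x"]
```

A slow frame simply causes the inner `while` to run the update more than once, so gameplay speed never drifts with the frame rate; that is the pattern that keeps the thing receptive while the GPU whines.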
How does this all relate to treating people as decision-making interpreters of a business process “language?” What is the best way for them to remain alert and receptive to status changes and aberrant unpredictables? If they are the gatekeepers of data transforms, I imagine stuffing everything into a database and rigging up facades. Is that the easy way out?
With enough statistical interaction, can a program learn a process? Is it all just gun-kata?