This is something I've noticed.
BSD, like Linux, uses a "swap partition" on the hard drive to serve as slow RAM when you have too little memory to keep up with your computing. So it uses virtual memory.
Windows has a page file that seems to serve much the same purpose, to my knowledge (intel would be nice). This paging memory likewise supplements RAM.
When I run BSD, I use almost no swap space even with many processes going. According to my system monitor I have 12 programs running, counting system tray and taskbar contents. My laptop (PCBSD) has around 150 processes going at this very moment with 0 swap space used (checked in the system monitor and a swap-space-checking program). The system is an AMD Mobile Sempron 3300+ (2GHz / 128KB L2, with 512MB DDR).
At any given moment my desktop, while running Windows, has about 50 processes (5-10 programs running between the taskbar and systray). Task Manager shows that I'm usually using 100-250MB of my page file, over 500MB during a game. When I run PCBSD on the same desktop it has a similar load to my laptop and no swap use, yet both machines remain more responsive when launching/switching programs under PCBSD than under Windows. The desktop is a Pentium D (3.0GHz / 2MB L2 x 2 cores, with 2GB DDR2).
My Windows page file is set up as a 4GB file, and the performance favoring is at the Windows defaults (which favor giving resources to user programs, if I recall).
I'm not a big fan of virtual/paging memory, but is there any way I can try to make my Windows system as fast as my BSD ones?
Basically, XP is a bunch of crap because it'll always use its pagefile, whether it has enough RAM or not. The fastest way to go is a) have fast RAM, b) have a 4GB pagefile.
If you have more than one hard disk, set the pagefile to a drive without Windows installed; that way it won't slow Windows down. And 50!! processes, jeez, close some of them.
Lt_Col WIZ, VC, MiD (Ret)
Right now I am running 40 processes and only actively using Firefox - 6 things in the systray: X-Fire and the necessary evils; Creative/nVidia tools, Bigfix, AV & FW + X-F.
I'm using 261MB of page file. I think this was all well and good back in the days of 16MB of RAM, but come on. If you've got enough RAM to cover your usage, why use the slower, cheaper hard drive?
PS: I have one 500GB hard drive sliced up into a Windows C drive (180GB NTFS), a BSD partition (120GB UFS2), a development BSD partition (85GB UFS2), and an extended partition with 3 logical drives: 4GB I can use for a Linux swap partition if I ever need to run Linux, 32GB FAT32 for my personal files, and 25GB for storing backups.
Of course, if you are confident that you have enough RAM, you can turn the pagefile off.
Lt_Col WIZ, VC, MiD (Ret)
Maybe if I had a 3rd GB of RAM.
To say that the page file is only used when there is not enough RAM available is not true. At least not on Windows.
When it has idle time, it will, among other things, do these two:
1. Rearrange the contents of memory to reduce fragmentation.
2. Copy data from RAM to the page file.
Number 1 is straightforward, and similar to defragmenting a hard disk.
Number 2 does mean copy and not move. Provided there is plenty of memory, the data will be both in memory and on disk in the page file. If there continues to be plenty of memory then this has little effect because when the data is needed it will be used straight from memory.
But if you start up another application or whatever and there is no longer enough free memory, then something can be overwritten in memory WITHOUT having to write it to disk (page file) first. That means it is extremely quick, when otherwise there would have been a huge delay while some unneeded data was swapped out to disk. It's already there.
This means that without any significant delay (disk access) a new process (or an existing one that needs more memory) can have a very large amount of memory (RAM, not disk) almost instantly whenever it needs it, even when there isn't any free. Magic.
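To picture that, here's a tiny toy sketch in C (my own illustration, not actual Windows code; the struct and names are made up) of why a page that was pre-copied during idle time is cheap to reclaim, while a modified one is not:

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical frame bookkeeping -- names invented for illustration. */
struct frame {
    int  id;
    bool dirty;  /* true = RAM copy differs from the page-file copy */
};

/* Reclaim a frame for a new use. Only dirty frames cost a disk write. */
void reclaim(struct frame *f) {
    if (f->dirty) {
        printf("frame %d: must write to page file first (slow)\n", f->id);
        /* ...disk write would happen here... */
        f->dirty = false;
    } else {
        printf("frame %d: already in page file, reused instantly\n", f->id);
    }
}

int main(void) {
    struct frame a = {1, false};  /* pre-copied during idle time */
    struct frame b = {2, true};   /* modified since last copy    */
    reclaim(&a);
    reclaim(&b);
    return 0;
}
```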
The only downside is that it puts some extra wear onto the hard disk, but with the quality of modern HDs that will not be significant.
.. you would be if you knew what it is for and how it works.
I only have 1/2 a gig of RAM, but a huge pagefile. Maybe that's why I can get away with resource-hungry applications?
Lt_Col WIZ, VC, MiD (Ret)
Strange, 'cuz I have 1 gig of RAM and a 4 gig pagefile...
Do you have it configured to a fixed (custom) size or a system-managed size? Fixed, I'm guessing.
My general understanding of virtual memory is thus:
Normal memory consists of the CPU and its cache(s), then the dynamic RAM, with speeds slowing as you move out toward RAM, though still quite fast there compared to disk. Since programs can hog more memory than this can provide, hard drive space is used so that it appears there is more overall memory than there really is. Some data is stored on disk and moved back and forth to physical memory as needed, by whatever method the system uses. Virtual memory makes it look as if the computer has more physical memory than it does. What I mean is, the programmer doesn't have to meddle with what he wants in RAM or on disk; the OS/MMU will figure it out and make it transparent to him/her, giving the amount of memory required.
If you give a hoot about speed and have lots of stuff moving around in memory, you don't want to make it "swap" more than you have to. Without virtual memory I suppose we'd have to deal with that upper/lower memory crap from the days of DOS, something about 640K of 1MB being usable for running programs; I can't remember it too well.
Linux/BSD prefer swap partitions to swap files, which should be put in the fastest section of the disk, and I've heard that keeps some fragmentation away from the root file system. I've also heard that the Windows pagefile can be locked at a fixed size rather than system-controlled to "in theory" stop it from getting too fragmented under heavy usage, at the price that if you run out of RAM and pagefile, the program (or Windows) is gonna cry. I remember reading that BSD only uses swap when there is not enough RAM to handle things, which is unlike Windows in my experience. Still, it gives an influx of cheap memory to the OS, more or less, as I recall from some former study of UNIX-like systems.
I did not say the Windows page file is not used until RAM is full, nor did I state that about BSD in my first post.
According to what I've read in Microsoft's "RAM, Virtual Memory, Pagefile and all that stuff":
32-bit Windows works with VM addresses up to 4GB regardless of your physical memory.
Since virtual memory seems to be considered as much a commodity as the size of a hard disk, each app can be made to feel it's got a heck of a lot of memory all to its lonesome, and when there's a lot of RAM drain it'll page out VM address space 4KB at a time to the pagefile to free up the RAM.
With my HDD I don't care much about the size of the page file, but I'd prefer Windows to use my RAM as much as possible since it's faster than the hard drive.
I'm no hacker, but I try to learn as much as I can; if my thinking is off track I'd love it if you could clarify.
The fact still remains, however, that BSD is a lot snappier for me than Windows is under lighter loads.
If you don't like being called a geek, look away now. :nerd:
I'll try to give a simple but more detailed description of Virtual Memory.
We have several goals that lead us to use Virtual Memory.
We want to enforce Protection - that is, we want to stop one process from editing the memory being used by another process, think viruses and stuff.
We don't want to run out of memory, even though we don't have much because it is expensive.
We want to make the most out of the memory that we do have.
Virtual Memory and derivatives (mostly Demand Paged Virtual Memory) help us to achieve all these goals.
========================================
My guide to Virtual Memory.
Whenever a process (a running program) accesses anything in memory, it says to the OS "get me the contents of memory at address #######".
The problems with that are many. How can you keep track of who owns which pieces of memory? How can you assign who gets what fairly? Once you give a piece of memory to a process, how do you get it back without breaking the process if you need it? Basically, you can't do it properly with a shared memory address space.
With Virtual Memory, the OS gives each and every process the illusion that it is the only process. Each process thinks it has its own memory of, for example, 4GB completely for its own use. It can access memory addresses from 0 to 4GB and no other process can get in the way. We have Protection automatically.
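If you want to see that illusion for yourself on a UNIX-like box, here's a little C program (my own toy example, not from any of the reading above): after fork(), the parent and child print the same virtual address but different contents, because the OS maps that address to different physical frames in each process.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int value = 100;  /* lives at some virtual address in this process */

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return EXIT_FAILURE; }

    if (pid == 0) {
        value = 200;  /* the child changes its own copy only */
        printf("child:  &value=%p value=%d\n", (void *)&value, value);
    } else {
        wait(NULL);
        /* Same printed address, different contents: the virtual address
           is per-process, mapped to separate physical frames. */
        printf("parent: &value=%p value=%d\n", (void *)&value, value);
    }
    return 0;
}
```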
A diagram might help.
So the job the OS does is to keep track, for each process, of which VM addresses map to which physical addresses, so that when a process requests a VM address, it can work out which physical address (if any) the data is stored at.
To simplify that job, we group addresses together into blocks. In Virtual Memory these are called pages. The blocks of physical memory to which they refer are called frames. The standard size for these is 4KB because that is the standard size of a disk block, making it efficient when you swap them to disk.
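Just to make the bit-twiddling concrete (my own sketch, assuming the 4KB pages mentioned above): the low 12 bits of an address are the offset within the page, and the rest is the page number the OS looks up.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE   4096u   /* 4KB pages, as above */
#define OFFSET_BITS 12      /* log2(4096)          */

int main(void) {
    uint32_t vaddr = 0x00403A10;  /* an arbitrary 32-bit VM address */

    uint32_t page   = vaddr >> OFFSET_BITS;     /* which page       */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);  /* where in page    */

    /* The OS looks 'page' up in the process's page table to find a
       physical frame; the same offset then applies within the frame. */
    printf("vaddr 0x%08X -> page 0x%05X, offset 0x%03X\n",
           vaddr, page, offset);
    return 0;
}
```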
So, the OS can now do a whole load of cool things and the processes will never know. In particular, it can put the physical frames (the real data) wherever it likes in memory, on disk, or anywhere else it feels like, without telling the process, so long as it can retrieve them when needed.
The interesting parts are:
= How to make the conversion between VM addresses and physical addresses efficient.
= How to decide when to write frames out to disk.
= How to decide which frames to write out to disk (a toy version of this choice is sketched just after this list).
= How to cope with processes allocating and freeing Virtual Memory.
= What to do when a process requests data that is actually on disk and not in memory. You may have to push some other frame out to disk to make space for the one you want (hence "swap").
= What other clever things can you do to make it all more efficient?
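To give a taste of the "which frames to write out" question, here's a deliberately dumb toy in C: FIFO replacement, where the oldest resident page always gets evicted. Real OSes use much cleverer policies (approximations of LRU, the clock algorithm, and so on); this is just my own sketch to show the shape of the problem.

```c
#include <stdio.h>

#define NFRAMES 3  /* pretend physical memory holds only 3 frames */

/* FIFO page replacement: count faults for a reference string. */
int main(void) {
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int nrefs  = sizeof refs / sizeof refs[0];

    int frames[NFRAMES] = {-1, -1, -1};
    int next = 0, faults = 0;

    for (int i = 0; i < nrefs; i++) {
        int hit = 0;
        for (int j = 0; j < NFRAMES; j++)
            if (frames[j] == refs[i]) hit = 1;

        if (!hit) {                      /* page fault: bring page in,   */
            frames[next] = refs[i];      /* evicting the oldest resident */
            next = (next + 1) % NFRAMES; /* (this is the FIFO choice)    */
            faults++;
        }
    }
    printf("%d faults for %d references\n", faults, nrefs);
    return 0;
}
```

Fun fact: give this particular reference string 4 frames instead of 3 and FIFO actually faults more, not less (Belady's anomaly), which is one reason this stuff fills those 2-inch-thick books.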
You can spend a lifetime learning about this stuff. The more you find out the more you realise there is to know. It gets very interesting, there are some really devious ways to do the above things, and I encourage you to go and read about it in an Operating Systems book (the kind that is at least 2 inches thick) as I have done.
One more mystery I can explain: the number the Windows Task Manager (Performance tab) calls "PF Usage" is actually not the amount of page file being used.
In fact, it is equal to the "Commit Charge" which is the total amount of data that exists, added up from all the processes, whether it is in page file or in RAM.
(to get the numbers to match, divide your total commit charge by 1024 and you'll get "PF Usage")
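(A quick worked example, borrowing the 261MB figure quoted earlier in the thread: Task Manager lists Commit Charge in KB, so a Commit Charge Total of 267,264 KB divided by 1024 comes out as the 261 shown under "PF Usage" in MB, even if most of that data is actually sitting in RAM.)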
Finding out how much page file is actually in use is more difficult; you can do it using the Performance tool mentioned below.
Remember that the things given in Task Manager are a gross simplification of the truth. To get the good stuff you need to go to Control Panel, Administrative Tools, Performance. Right-click on the graph to add different things. Also, in Task Manager -> Processes, you can click View -> Select Columns to display some more stats.
Very interesting and a good summary of Microsoft's handling. I've long learned that if no one is there to teach you, you had best start teaching yourself.
Very intriguing, but also very disheartening, as the idle-time copying means the page file usage figure is next to meaningless for all intents and purposes.
Hmmm, I do dearly wonder how on earth Windows can still manage to be outperformed.
My long post with the diagram in it is not Microsoft or Windows specific. It's the same for UNIX/Linux.
Details beyond what I posted will vary between OSes but the principles I gave in my post apply to both.
And the award for most words used in a post goes to....
Lt_Col WIZ, VC, MiD (Ret)
Maybe James Michener?? After all, he wrote a 1500-page novel.
True, James, but it does follow the Windows description on the detailed points well enough, as I implied.