Disk Defragmentation in Windows 7

Status
Closed for new replies.
An excellent article about the new disk defragmentation improvements in Windows 7.

One of the features that you’ve been pretty clear about (I’ve received over 100 emails on this topic!) is the desire to improve the disk defrag utility in Windows 7. We did. And from blogs we saw a few of you noticed, which is great. This is not as straightforward as it may appear. We know there’s a lot of history in defrag and how “back in the day” it was a very significant performance issue and also a big mystery to most people. So many folks came to know that if your machine is slow you had to go through the top-secret defrag process. In Windows Vista we decided to just put the process on autopilot with the intent that you’d never have to worry about it. In practice this turns out to be true, at least to the limits of automatically running a process (that is, if you turn your machine off every night then it will never run). We received a lot of feedback from knowledgeable folks wanting more information on defrag status, especially during execution, as well as more flexibility in terms of the overall management of the process. This post will detail the changes we made based on that feedback. In reading the mail and comments we received, we also thought it would be valuable to go into a little bit more detail about the process, the perceptions and reality of performance gains, as well as the specific improvements. This post is by Rajeev Nagar and Matt Garson, both Program Managers on our File System feature team. --Steven

In this blog, we focus on disk defragmentation in Windows 7. Before we discuss the changes introduced in Windows 7, let’s chat a bit about what fragmentation is, and its applicability.

Within the storage and memory hierarchy comprising the hardware pipeline between the hard disk and CPU, hard disks are relatively slower and have relatively higher latency. Read/write times from and to a hard disk are measured in milliseconds (typically, 2-5 ms) – which sounds quite fast until compared to a 2GHz CPU that can compute data in less than 10 nanoseconds (on average), once the data is in the L1 memory cache of the processor.
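To put that gap in rough numbers, here is a tiny back-of-the-envelope sketch; the 3 ms and 10 ns figures are the illustrative values from the paragraph above, not measurements of any particular system:

```python
# Rough, illustrative comparison of disk latency vs. CPU-cache access time.
disk_access_s = 3e-3    # ~3 ms for a random hard-disk access (assumed)
l1_access_s = 10e-9     # ~10 ns to work on data already in the L1 cache (assumed)

ratio = disk_access_s / l1_access_s
print(f"One disk access costs roughly {ratio:,.0f}x an L1-cache access")
# -> roughly 300,000x, which is why a fast CPU sits idle waiting on the disk
```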

This performance gap has only been increasing over the past 2 decades – the figures below are noteworthy.


In short, the figures illustrate that while disk capacities are increasing, their ability to transfer data or write new data is not increasing at an equivalent rate – so disks contain more data that takes longer to read or write. Consequently, fast CPUs are relatively idle, waiting for data to do work on.

Significant research in Computer Science has focused on improving overall system I/O performance, which has led to two principles that the operating system tries to follow:

  1. Perform less I/O, i.e. try and minimize the number of times a disk read or write request is issued.
  2. When I/O is issued, transfer data in relatively large chunks, i.e. read or write in bulk.
Both rules have a reasonably simple rationale:

  1. Each time an I/O is issued by the CPU, multiple software and hardware components have to do work to satisfy the request. This contributes toward increased latency, i.e., the amount of time until the request is satisfied. This latency is often directly experienced by users when reading data and leads to increased user frustration if expectations are not met.
  2. Movement of mechanical parts contributes substantially to incurred latency. For hard disks, the “rotational time” (time taken for the disk platter to rotate in order to get the right portion of the disk positioned under the disk head) and the “seek time” (time taken by the head to move so that it is positioned to be able to read/write the targeted track) are the two major culprits. By reading or writing in large chunks, the incurred costs are amortized over the larger amount of data that is transferred – in other words, the “per unit” data transfer costs decrease.
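To make the second rule concrete, here is a small sketch of how the fixed mechanical cost gets amortized as the transfer size grows; the 12 ms latency and 100 MB/s throughput are illustrative assumptions, not figures from any particular drive:

```python
# Illustrative: fixed mechanical cost (seek + rotation) amortized over the transfer size.
SEEK_PLUS_ROTATION_S = 0.012   # ~12 ms of mechanical latency per request (assumed)
TRANSFER_MB_PER_S = 100.0      # ~100 MB/s sequential throughput (assumed)

def request_time_s(chunk_mb: float) -> float:
    """Total time to service one request of `chunk_mb` megabytes."""
    return SEEK_PLUS_ROTATION_S + chunk_mb / TRANSFER_MB_PER_S

for chunk_mb in (0.064, 1, 8, 64):
    per_mb_ms = request_time_s(chunk_mb) / chunk_mb * 1000
    print(f"{chunk_mb:>7} MB per request -> {per_mb_ms:7.2f} ms per MB transferred")
# The per-MB cost drops sharply as the chunk grows, because the mechanical
# latency is paid once per request rather than once per unit of data.
```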
File systems such as NTFS work quite hard to try and satisfy the above rules. As an example, consider the case when I listen to the song “Hotel California” by the Eagles (one of my all time favorite bands). When I first save the 5MB file to my NTFS volume, the file system will try and find enough contiguous free space to be able to place the 5MB of data “together” on the disk.

This matters because logically related data (e.g. contents of the same file or directory) is more likely to be read or written around the same time. For example, I would typically play the entire song “Hotel California” and not just a portion of it. During the 3 minutes that the song is playing, the computer would be fetching portions of this “related content” (i.e. sub-portions of the file) from the disk until the entire file is consumed. By making sure the data is placed together, the system can issue read requests in larger chunks (often pre-reading data in anticipation that it will soon be used), which, in turn, will minimize mechanical movement of hard disk drive components and also ensure fewer issued I/Os.

Given that the file system tries to place data contiguously, when does fragmentation occur? Modifications to stored data (e.g. adding, changing, or deleting content) cause changes in the on-disk data layout and can result in fragmentation. For example, file deletion naturally causes space de-allocation and resultant “holes” in the allocated space map – a condition we will refer to as “fragmentation of available free space”. Over time, contiguous free space becomes harder to find leading to fragmentation of newly stored content. Obviously, deletion is not the only cause of fragmentation – as mentioned above, other file operations such as modifying content in place or appending data to an existing file can eventually lead to the same condition.
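As a toy illustration of how deletions punch holes in the free-space map, consider the following sketch; it uses a simplified block list, not the way NTFS actually tracks allocations:

```python
# Toy volume: a list of block owners; None means a free block.
volume = []

def allocate(name, blocks):
    """Append a file's blocks contiguously at the end of the toy volume."""
    volume.extend([name] * blocks)

def delete(name):
    """Free every block owned by `name`, leaving holes behind."""
    for i, owner in enumerate(volume):
        if owner == name:
            volume[i] = None

for f, size in [("A", 4), ("B", 3), ("C", 5), ("D", 2)]:
    allocate(f, size)
delete("B")
delete("D")

print("".join(owner or "." for owner in volume))   # AAAA...CCCCC..
# Five blocks are free in total, but the largest contiguous hole is only
# three blocks, so a new 5-block file can no longer be stored contiguously.
```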

So how does defragmentation help? In essence, defragmentation helps by moving data around so that it is once again placed more optimally on the hard disk, providing the following benefits:

  1. Any logically related content that was fragmented can be placed adjacently
  2. Free space can be coalesced so that new content can be written to the disk efficiently
The following diagram will help illustrate what we’re discussing. The first illustration represents an ideal state of a disk – there are 3 files, A, B, and C, and all are stored in contiguous locations; there is no fragmentation. The second illustration represents a fragmented disk – a portion of data associated with File A is now located in a non-contiguous location (due to growth of the file). The third illustration shows how data on the disk would look once the disk was defragmented.
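Continuing the toy model from above, a naive "defragmenter" can produce the third illustration's layout by rewriting every file's blocks contiguously and pushing the free space to the end. This is only a sketch of the idea; a real defragmenter moves extents in place instead of rebuilding the whole volume:

```python
def defragment(volume):
    """Rebuild the toy volume so each file's blocks are contiguous and free space is coalesced at the end."""
    seen = []
    for owner in volume:                     # remember files in first-seen order
        if owner is not None and owner not in seen:
            seen.append(owner)
    packed = []
    for owner in seen:
        packed.extend([owner] * volume.count(owner))
    packed.extend([None] * volume.count(None))   # all free blocks coalesced at the end
    return packed

fragmented = ["A", "A", None, "C", "A", None, None, "C", "B"]
print("".join(b or "." for b in defragment(fragmented)))   # AAACCB...
```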


Nearly all modern file systems support defragmentation – the differences generally are in the defragmentation mechanism: whether, as in Windows, it’s a separate, schedulable task or whether the mechanism is more implicitly managed and internal to the file system. The design decisions simply reflect the particular design goals of the system and the necessary tradeoffs. Furthermore, it’s unlikely that a general-purpose file system could be designed such that fragmentation never occurred.

Over the years, defragmentation has been given a lot of emphasis because, historically, fragmentation was a problem that could have more significant impact. In the early days of personal computing, when disk capacities were measured in megabytes, disks got full faster and fragmentation occurred more often.

Further, memory caches were significantly limited and system responsiveness was increasingly predicated on disk I/O performance. This got to the point where some users ran their defrag tool weekly or even more often! Today, very large disk drives are available cheaply and disk utilization for the average consumer is likely to be lower, causing relatively less fragmentation.

Further, computers can utilize more RAM cheaply (often, enough to be able to cache the data set actively in use). That, together with improvements in file system allocation strategies as well as caching and pre-fetching algorithms, further helps improve overall responsiveness.

Therefore, while the performance gap between the CPU and disks continues to grow and fragmentation does occur, combined hardware and software advances in other areas allow Windows to mitigate fragmentation impact and deliver better responsiveness.

So, how would we evaluate fragmentation given today’s software and hardware? A first question might be: how often does fragmentation actually occur and to what extent? After all, 500GB of data with 1% fragmentation is significantly different than 500GB with 50% fragmentation. Secondly, what is the actual performance penalty of fragmentation, given today’s hardware and software? Quite a few of you likely remember various products introduced over the past two decades offering various performance enhancements (e.g. RAM defragmentation, disk compression, etc.), many of which have since become obsolete due to hardware and software advances.

The incidence and extent of fragmentation in average home computers varies quite a bit depending on available disk capacity, disk consumption, and usage patterns. In other words, there is no general answer. The actual performance impact of fragmentation is the more interesting question but even more complex to accurately quantify. A meaningful evaluation of the performance penalty of fragmentation would require the following:

  • Availability of a system that has been “aged” to create fragmentation in a typical or representative manner. But, as noted above, there is no single, representative behavior. For example, the frequency and extent of fragmentation on a computer used primarily for web browsing will be different than a computer used as a file server.
  • Selection of meaningful disk-bound metrics, e.g., boot time and first-time application launch after boot.
  • Repeated measurements that can be statistically relevant
Let’s walk through an example that helps illustrate the complexity in directly correlating extent of fragmentation with user-visible performance.

In Windows XP, any file that is split into more than one piece is considered fragmented. Not so in Windows Vista if the fragments are large enough – the defragmentation algorithm was changed (from Windows XP) to ignore pieces of a file that are larger than 64MB. As a result, defrag in XP and defrag in Vista will report different amounts of fragmentation on a volume. So, which one is correct? Well, before the question can be answered we must understand why defrag in Vista was changed.

In Vista, we analyzed the impact of defragmentation and determined that the most significant performance gains from defrag are when pieces of files are combined into sufficiently large chunks such that the impact of disk-seek latency is not significant relative to the latency associated with sequentially reading the file. This means that there is a point after which combining fragmented pieces of files has no discernible benefit.

In fact, there are actually negative consequences of doing so. For example, for defrag to combine fragments that are 64MB or larger requires significant amounts of disk I/O, which is against the principle of minimizing I/O that we discussed earlier (since it decreases total available disk bandwidth for user-initiated I/O), and puts more pressure on the system to find large, contiguous blocks of free space. Here is a scenario where a certain amount of fragmentation of data is just fine – doing nothing to decrease this fragmentation turns out to be the right answer!
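A rough calculation makes the 64MB cutoff intuitive. The seek latency and throughput below are illustrative assumptions; the point is only how quickly the relative cost of one extra seek shrinks as fragments get larger:

```python
# How much does one extra seek cost, relative to reading the fragment it splits off?
SEEK_S = 0.012                 # assumed mechanical latency per extra fragment
TRANSFER_MB_PER_S = 100.0      # assumed sequential throughput

for fragment_mb in (0.5, 4, 64, 256):
    read_s = fragment_mb / TRANSFER_MB_PER_S
    overhead_pct = SEEK_S / read_s * 100
    print(f"{fragment_mb:>6} MB fragment -> extra seek adds {overhead_pct:6.1f}% to its read time")
# For small fragments the extra seek dominates; for 64 MB and larger fragments it
# adds only a couple of percent, so moving them costs more I/O than it could save.
```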

Note that a concept that is relatively simple to understand, such as the amount of fragmentation and its impact, is in reality much more complex, and accurately assessing its real impact requires a comprehensive evaluation of the entire system. The different design decisions across Windows XP and Vista reflect this evaluation of the typical hardware & software environment used by customers. Ultimately, when thinking about defragmentation, it is important to realize that there are many additional factors contributing towards system responsiveness that must be considered beyond a simple count of existing fragments.

The defragmentation engine and experience in Windows 7 have been revamped based on continuous and holistic analysis of impact on system responsiveness:
In Windows Vista, we had removed all of the UI that would provide detailed defragmentation status. We received feedback that you didn’t like this decision, so we listened, evaluated the various tradeoffs, and have built a new GUI for defrag! As a result, in Windows 7, you can monitor status more easily and intuitively.

Further, defragmentation can be safely terminated at any time during the process, and on all volumes, very simply (if required). The two screenshots below illustrate the ease of monitoring:



In Windows XP, defragmentation had to be a user-initiated (manual) activity, i.e., it could not be scheduled. Windows Vista added the capability to schedule defragmentation – however, only one volume could be defragmented at any given time.

Windows 7 removes this restriction – multiple volumes can now be defragmented in parallel, with no more waiting for one volume to be defragmented before initiating the same operation on some other volume! The screenshot below shows how defragmentation can be concurrently scheduled on multiple volumes:
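As a rough sketch of what "in parallel" means from a scripting point of view, the snippet below kicks off a per-volume job on several volumes at once. `defragment_volume` is a hypothetical placeholder, not a real Windows API; on an actual Windows 7 machine this work is done by the built-in defragmenter itself:

```python
# Sketch: run a (hypothetical) per-volume defragmentation job on several volumes concurrently.
from concurrent.futures import ThreadPoolExecutor

def defragment_volume(volume: str) -> str:
    # Placeholder for the real work, e.g. handing the volume to the system defragmenter.
    print(f"defragmenting {volume} ...")
    return f"{volume}: done"

volumes = ["C:", "D:", "E:"]
with ThreadPoolExecutor(max_workers=len(volumes)) as pool:
    for result in pool.map(defragment_volume, volumes):
        print(result)
```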


Among the other changes under the hood in Windows 7 are the following:

  • Defragmentation in Windows 7 is more comprehensive – many files that could not be re-located in Windows Vista or earlier versions can now be optimally re-placed. In particular, a lot of work was done to make various NTFS metadata files movable. This ability to relocate NTFS metadata files also benefits volume shrink, since it enables the system to pack all files and file system metadata more closely and free up space “at the end” which can be reclaimed if required.
  • If solid-state media is detected, Windows disables defragmentation on that disk. The physical nature of solid-state media is such that defragmentation is not needed and in fact, could decrease overall media lifetime in certain cases.
  • By default, defragmentation is disabled on Windows Server 2008 R2 (the Windows 7 server release). Given the variability of server workloads, defragmentation should be enabled and scheduled only by an administrator who understands those workloads.
Best practices for using defragmentation in Windows 7 are simple – you do not need to do anything! Defragmentation is scheduled to automatically run periodically and in the background with minimal impact to foreground activity. This ensures that data on your hard disk drives is efficiently placed so the system can provide optimal responsiveness and I can continue to enjoy glitch-free listening to the Eagles :).
Rajeev and Matt

Taken from: Engineering Windows 7
 
Shouldn't they just use a file system that doesn't fragment?

No.

A file system that doesn't fragment (like EXT3) is optimized for reads rather than writes, which makes it useful in some server configurations that are particularly read-heavy and write very little.


EXT3 is quite heavyweight, and it does fragment, except that at write time it automatically relocates blocks or entire files, which makes writes very costly; for a home user that is very detrimental.

That kind of configuration is not recommended for the average user, at least not with today's technologies. The main bottlenecks in a system come from I/O on disk devices.
 
I think it would be more informative to post a summary of the news in Spanish with the most important excerpts and put the link to the English site at the end. It seems unnecessary and counterproductive to post such a long article in English for something that can be boiled down to its conclusions.
 


Sure, but let me correct a few things:

1. You can't post a summary of the story because you can't publish your own articles here; I've already tried and they're not welcome.
2. You say that because you don't know English or don't like reading in English.
3. If that's the case, the solution is not to ask for everything in Spanish, the solution is to learn English. Technology evolves day by day and quality publications almost always come in English... whoever doesn't know English gets swept away by the current.
 
Interesting, so to you it seems more important to get hundreds of laneros to learn English to read a news item than to actually intend to inform them. It's not that I think your interest in others learning English is a bad thing, but given the immediacy of the informative purpose (which is what's intended when news is posted), it is not consistent to put the language up as an obstacle. But anyway, it's my opinion; it's up to you whether you take it into account.

I have no intention of informing those who don't know English, only those who do or at least those who can read it.

They are already at a competitive disadvantage.
 
A question, based on the above: could erasing the data on a disk and writing it all back be considered defragmenting it? For example, copying all the data off a disk (or a considerable amount of it) and then copying it back; in theory that could avoid fragmentation, right?
 
This guy, on top of being cocky, is pretentious.

I have a question: with SSDs, isn't write time considerably reduced, shrinking the gap between the processor's compute speed and the HDD's write time?... I think it would be good if they used a file system that doesn't fragment and thus avoided these problems, which definitely show up in a PC's performance.
 
This guy, on top of being cocky, is pretentious.
:perro:
I have a question: with SSDs, isn't write time considerably reduced, shrinking the gap between the processor's compute speed and the HDD's write time?

Of course it IS, which is why Microsoft added support for SSDs and hybrid drives starting with Windows Vista.

...I think it would be good if they used a file system that doesn't fragment and thus avoided these problems, which definitely show up in a PC's performance.
I already answered this; on top of being a critic you don't read... (haha, just kidding, take it easy)
look:

No.

A file system that doesn't fragment (like EXT3) is optimized for reads rather than writes, which makes it useful in some server configurations that are particularly read-heavy and write very little.


EXT3 is quite heavyweight, and it does fragment, except that at write time it automatically relocates blocks or entire files, which makes writes very costly; for a home user that is very detrimental.

That kind of configuration is not recommended for the average user, at least not with today's technologies. The main bottlenecks in a system come from I/O on disk devices.
 
I don't see why some people get offended at seeing a news item in English, and I don't understand the request for a summary either. Summaries of news items are something I detest, since there is always the risk that the person summarizing injects their own comments; a summary should always be made by oneself. Besides, I don't think people are so lazy that they won't read...
 
FOR EVERYONE!!

News and any content in English on LANeros.com is allowed:
Sure, but let me correct a few things:

1. You can't post a summary of the story because you can't publish your own articles here; I've already tried and they're not welcome.
2. You say that because you don't know English or don't like reading in English.
3. If that's the case, the solution is not to ask for everything in Spanish, the solution is to learn English. Technology evolves day by day and quality publications almost always come in English... whoever doesn't know English gets swept away by the current.
At what point have articles written by users ever been rejected?? Where did you get that from? The site accepts copy/paste for the sake of convenience and volume of information, but it is a thousand times better when people write their own content linked to the source they took it from. I've done it a few times myself; it just takes time, and there isn't always time.

Now, for such an excessively long article I only have 2 things to say:
1. Yes, it would be good to summarize it: the italicized part at the beginning plus the images at the end would be enough... the full article can be linked instead of dumping that huge wall of text in there.
2. There's no problem with it being in English.
 
It seems to me that some things are still being done backwards. Why devote such a big effort to defragmentation techniques when the future is SSDs, and on those drives this problem disappears?
 
So because SSDs are the future, they should stop making tools for the 99% of PCs that run, and will keep running for quite a while, on hard drives??
Besides... it will be a looong time before SSDs reach servers, and that's where a tool like this matters most.
 
I understand that point, but I really doubt Windows 7 is aimed at servers. And on home or mid-size business PCs, file fragmentation has never been a major headache.
 
Well, in my own case file fragmentation did wreck my PC more than once and cut its performance a lot.

And as for servers, they'll surely include that same tool in whatever Windows Server xxx release comes out... or at least I suppose so. They almost always include the good improvements in both.
 
To be clear, what I did was offer suggestions and give my opinion; I never said this or that couldn't be done. I didn't vote the news item down, in fact I think it's very valuable, but its content could have been made more concise and, translated, it could have reached more readers. As you can see, they were only suggestions.
 
This guy, on top of being cocky, is pretentious.

I have a question: with SSDs, isn't write time considerably reduced, shrinking the gap between the processor's compute speed and the HDD's write time?...

They are also working on improving Windows performance with SSDs.

Windows Vista SP1 vs Windows 7 Beta

w7vistassdperf-test01.png


As for ext3, I think it's already on its way out :p. Its successor, ext4, is ready and fixes some of ext3's drawbacks. Now all eyes are on how Oracle's btrfs develops, which promises to be on par with Solaris' ZFS.

This is how ext4 is doing against other file systems:

phoronix_benchmark.png
 
It seems to me that some things are still being done backwards. Why devote such a big effort to defragmentation techniques when the future is SSDs, and on those drives this problem disappears?

Because 99% of users still don't have that technology, and because right now that technology offers a very short lifespan compared to magnetic disks, so it could easily end up being replaced by something else.
However, Windows Vista and especially Windows 7 include impressive improvements in how they use that class of device.
 