Before I respond to the main idea I wanted to write about for this chapter, I'd like to quickly note my reaction to the idea of a GUI's mental model of a user. I thought it was such an unconventional and interesting framework for viewing interfaces! We tend to view objects from the perspective of a human, and we seldom think the other way around: how would objects think about and perceive humans if they could think? Using this framework can not only remind us to be humble (don't center everything around humans), but also place more emphasis on how we might balance the use of all kinds of input to an interface.

The main idea I'd like to discuss is the difference between acoustic instrument interface design and computer instrument interface design. The beginning of this chapter highlights a major difference between the two: acoustic instruments usually have form follow physics or function, whereas computer music often has form decoupled from function. This difference stems from the inherent difference between acoustic and computer sound. Acoustic instruments, by definition, rely on physical rules and principles to produce vibrations that become sound in our ears. Computer music, on the other hand, generates sound by synthesizing waveforms — some that mimic the sound waves an acoustic instrument could produce, and many other kinds besides — and emitting them from speakers. Therefore, the forms a computer can take are much less constrained by function, and there is much more freedom in designing an interface for a computer tool or toy that makes sound.

Of course, no one says you cannot build a computer instrument that looks exactly like an acoustic one, but that seems like a waste, because the acoustic instrument already exists and probably produces better, more natural sound. I think this is why electronic pianos often come with more features than just keys that can be pressed to produce piano-like musical notes: there are often features like changing the timbre to that of another instrument or recording a sequence of notes that you played. Still, those are pretty basic features that aren't quite "smart" in terms of the new elements or improvements the digital version brings to the table. Perry Cook's principle that "copying an instrument is DUMB, leveraging expert technique is SMART", and the computer instrument he made from an acoustic accordion, demonstrate perfectly how some expertise in the acoustic version of an instrument is often required to create brilliant computer instrument interface designs.
The "improved accordion" ingeniously integrates computer input, which has the capacity to differentiate small details like pressure and to take in a large amount of input at once or over time, into the physical interface of the keyboard. Indeed, it does not copy the accordion but enhances and augments it by leveraging what computers do well.