Phonetics
For phonemes, see Phonology. For a method of teaching reading and writing, see Phonics. For other uses, see Phonetics (disambiguation).
Phonetics is a branch of linguistics that studies how humans produce and perceive sounds, or, in the case of sign languages, the equivalent aspects of sign. [1] Phoneticians - linguists who specialize in phonetics - study the physical properties of speech. The field is traditionally divided into three sub-disciplines by research question: how movements are planned and executed to produce speech (articulatory phonetics), how different movements affect the properties of the resulting sound (acoustic phonetics), and how humans convert sound waves into linguistic information (auditory phonetics). Traditionally, the minimal linguistic unit of phonetics is the phone - a speech sound in a language - which differs from the phonological unit of the phoneme; a phoneme is an abstract categorization of phones.
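The phone/phoneme distinction can be made concrete with a toy example: the English phoneme /t/ surfaces as several different phones depending on context, such as aspirated [tʰ] in "top" or the flap [ɾ] in American English "butter". A minimal sketch, using a small illustrative (not exhaustive) allophone table:

```python
# Toy illustration: one phoneme (an abstract category) maps to several
# phones (concrete speech sounds). The entries are illustrative examples
# from American English, not a complete allophone inventory.
allophones = {
    "/t/": ["[tʰ]", "[t]", "[ɾ]", "[ʔ]"],  # e.g. top, stop, butter, button
    "/p/": ["[pʰ]", "[p]"],                # e.g. pin, spin
}

def phones_of(phoneme: str) -> list[str]:
    """Return the phones a given phoneme may surface as."""
    return allophones.get(phoneme, [])

print(phones_of("/t/"))  # one phoneme, several phones
```

The point of the mapping is directional: a listener hearing any of the phones on the right recovers the single abstract category on the left.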
Phonetics is broadly concerned with two aspects of human speech: production - the way people make sounds - and perception - the way speech is understood. The communicative modality of a language describes the method by which a language is produced and perceived. Languages with an oral-aural modality, such as English, produce speech orally (using the mouth) and perceive speech aurally (using the ears). Sign languages such as Auslan and ASL have a manual-visual modality, producing speech manually (using the hands) and perceiving it visually (using the eyes). ASL and some other sign languages additionally have a manual-manual dialect for use in tactile signing by deafblind speakers, in which signs are both produced and perceived with the hands.
Speech production consists of several interdependent processes that transform a non-linguistic message into a spoken or signed linguistic signal. Once the message to be encoded linguistically has been identified, the speaker must select the individual words - known as lexical items - that express it, through a process called lexical selection. During phonological encoding, the mental representation of the words is assigned its phonological content as a sequence of phonemes to be produced. The phonemes are specified for articulatory features that denote particular goals, such as closed lips or the tongue in a particular location. These phonemes are then coordinated into a sequence of muscle commands that can be sent to the muscles, and when these commands are executed correctly the intended sounds are produced.
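The staged pipeline above (message → lexical selection → phonological encoding → motor commands) can be sketched as a chain of functions. This is a didactic sketch only: the two-word lexicon and the articulatory-target labels are hypothetical placeholders, not a model of real speech planning.

```python
# Sketch of the production stages described above, with a hypothetical
# toy lexicon. Each stage is a plain function so the pipeline stays visible.
LEXICON = {"cat": ["k", "æ", "t"], "sat": ["s", "æ", "t"]}  # toy entries

def lexical_selection(message: list[str]) -> list[str]:
    # Select the lexical items that express the message (trivially, by lookup).
    return [w for w in message if w in LEXICON]

def phonological_encoding(words: list[str]) -> list[str]:
    # Assign each word its phonological content: a sequence of phonemes.
    return [p for w in words for p in LEXICON[w]]

def motor_plan(phonemes: list[str]) -> list[str]:
    # Map each phoneme to an articulatory target (placeholder labels).
    targets = {
        "k": "velar closure",
        "æ": "open front vowel",
        "t": "alveolar closure",
        "s": "alveolar fricative",
    }
    return [targets[p] for p in phonemes]

plan = motor_plan(phonological_encoding(lexical_selection(["cat", "sat"])))
print(plan)  # one articulatory target per phoneme
```

Keeping the stages separate mirrors the claim in the text that production is a sequence of interdependent but distinct processes.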
These movements disrupt and modify an airstream, producing a sound wave. The modification is performed by the articulators, with different places and manners of articulation yielding different acoustic results. For example, the words tack and sack both begin with alveolar sounds in English but differ in how far the tongue is from the alveolar ridge. This difference has large effects on the airstream and therefore on the sound produced. Similarly, the direction and source of the airstream can affect the sound. The most common airstream mechanism is pulmonic - using the lungs - but the glottis and tongue can also be used to produce airstreams.
Speech perception is the process by which a linguistic signal is decoded and understood by a listener. To perceive speech, the continuous acoustic signal must be converted into discrete linguistic units such as phonemes, morphemes, and words. To correctly identify and categorize sounds, listeners prioritize certain aspects of the signal that can reliably distinguish between linguistic categories. While certain cues are prioritized over others, many aspects of the signal can contribute to perception. For example, though oral languages prioritize acoustic information, the McGurk effect shows that visual information is used to distinguish ambiguous information when the acoustic cues are unreliable.
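The mapping from a continuous signal to discrete categories can be illustrated with one well-studied cue: voice onset time (VOT), which helps English listeners distinguish /b/ from /p/. A minimal sketch; the 25 ms boundary is an illustrative value chosen for the example, not a measured perceptual threshold:

```python
# Toy illustration of categorical perception: a continuous cue
# (voice onset time, in milliseconds) is mapped onto discrete phoneme
# categories. The 25 ms boundary is an assumed, illustrative value.
def categorize_vot(vot_ms: float) -> str:
    """Classify a VOT value as the English category /b/ or /p/."""
    return "/p/" if vot_ms >= 25 else "/b/"

# A continuum of stimuli collapses onto just two percepts.
stimuli = [0, 10, 20, 30, 40, 60]
percepts = [categorize_vot(v) for v in stimuli]
print(percepts)
```

The example captures the text's point that listeners do not track the raw signal value itself: small differences on one side of the boundary are ignored, while the same-sized difference across it changes the category.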