209k views
0 votes
Write a text analyzer that reads a file and outputs statistics about that file. It should output the word frequencies of all words in the file, sorted by the most frequently used word. The output should be a set of pairs, each pair containing a word and how many times it occurred in the file.

by SHernandez (7.8k points)

1 Answer

6 votes

Answer:

from collections import Counter

text_holder = []

# Read the file line by line and collect every whitespace-separated word
with open("file_name", "r") as file:
    for line in file:
        for word in line.split():
            text_holder.append(word)

# Count how many times each word occurs
counter = Counter(text_holder)

# most_common() returns (word, count) pairs sorted by frequency, highest first
for key, value in counter.most_common():
    pair = (key, value)
    print(pair)

Step-by-step explanation:

The Python source code above uses the Counter class from the collections module in the standard library to convert the list text_holder into a counted dictionary, with each unique word from the text file as a key and its frequency as the value. Calling counter.most_common() then returns those (word, count) pairs ordered from the most frequently used word to the least, which is the order the question asks for.
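As a quick illustration (a minimal sketch using a made-up word list in place of the file contents), Counter.most_common() already produces the sorted pairs, so no extra sorting step is needed:

from collections import Counter

# Hypothetical sample words standing in for the text read from the file
sample_words = "the cat sat on the mat the end".split()

counter = Counter(sample_words)

# most_common() yields (word, count) pairs, most frequent first
print(counter.most_common())
# [('the', 3), ('cat', 1), ('sat', 1), ('on', 1), ('mat', 1), ('end', 1)]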

by Zarek (8.3k points)