The short answer is that down sampling compresses a discrete-time signal by discarding samples, while up sampling expands the signal by inserting zeros between the original samples.
In the first part of the question, you are asked to write a MATLAB script to down sample the signal x[n] by a factor of m. This is done by keeping every m-th sample of the signal, i.e., y1[n] = x[mn]. By plotting x[n] and y1[n], you can observe the effect of down sampling: y1[n] has roughly 1/m as many samples as x[n], so it is a compressed version of the signal.
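A minimal MATLAB sketch of this step (assumptions: x is a row vector holding x[n], m is the down-sampling factor, and the cosine below is just a placeholder for whatever x[n] your exercise defines):

```matlab
m = 3;                            % down-sampling factor (example value)
n = 0:29;                         % sample indices
x = cos(0.1*pi*n);                % placeholder signal; substitute your x[n]

y1 = x(1:m:end);                  % keep every m-th sample: y1[n] = x[mn]
n1 = 0:length(y1)-1;              % index axis for the down-sampled signal

subplot(2,1,1); stem(n,  x);  title('x[n]');
subplot(2,1,2); stem(n1, y1); title('y1[n] = x[mn]');
```

Note that `x(1:m:end)` starts from the first sample, so y1[0] = x[0] and the plot shows the compression directly.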
In the second part of the question, you are asked to up sample x[n] by a factor of m. This is done by inserting m-1 zeros between each pair of original samples in x[n], creating the signal y2[n]. Whenever n is an integer multiple of m, y2[n] equals x[n/m]; for all other n, y2[n] is zero. (The even/odd description applies only to the special case m = 2.) By plotting y2[n], you can observe the expansion of the signal compared to x[n].
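A corresponding sketch for the up-sampling step (same assumptions as before: x is a row vector, m is the factor, and the cosine is a placeholder signal):

```matlab
m = 3;                            % up-sampling factor (example value)
n = 0:9;                          % sample indices
x = cos(0.2*pi*n);                % placeholder signal; substitute your x[n]

y2 = zeros(1, m*length(x));       % pre-fill with zeros
y2(1:m:end) = x;                  % y2[n] = x[n/m] when n is a multiple of m

subplot(2,1,1); stem(0:length(x)-1,  x);  title('x[n]');
subplot(2,1,2); stem(0:length(y2)-1, y2); title('y2[n]');
```

The assignment `y2(1:m:end) = x` places the original samples at indices 0, m, 2m, ..., leaving the m-1 positions in between at zero, which is exactly the expansion described above.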
Overall, down sampling reduces the number of samples in a signal, while up sampling increases the number of samples by inserting zeros.