The answer to this question is actually more profound than one might expect: the definition of probability is difficult to state precisely and can vary with the application - even in the context of a rigorous treatment of probability and statistics.
If you're willing to accept the canonical relative frequency interpretation, as described by many introductory statistics textbooks, then a probabilistic forecast - like the 30% chance of rain you posit - should verify at a rate equal to the stated probability. In simple terms, your 30% probability-of-rain forecast should correspond to an outcome of rain roughly 30% of the time when evaluated over a sufficiently large set of analogous forecasts. One could also debate what constitutes an "analogous" forecast scenario, over which the long-run relative frequency would be evaluated, but that is beyond the scope of our current discussion.
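If it helps to see this concretely, here is a minimal Python sketch of that verification idea. It assumes, purely for illustration, that the forecast is perfectly calibrated - i.e., the true probability of rain in every "analogous" scenario really is 0.30:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

forecast_prob = 0.30  # the stated 30% chance of rain
n_scenarios = 10_000  # many hypothetically analogous forecast scenarios

# Simulate whether rain actually occurs in each scenario, under the
# assumption that the forecast is perfectly calibrated (the true
# probability of rain really is 0.30 in each scenario).
rain_occurred = rng.random(n_scenarios) < forecast_prob

# Over a large sample, the observed relative frequency of rain should
# land close to the stated 30%.
print(f"Observed frequency of rain: {rain_occurred.mean():.3f}")
```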
In short, over a single stochastic trial, the quality of a probabilistic forecast of 30% is difficult to assess. That 30% may be appropriate for the forecast scenario and yet appear - at least superficially - to be inconsistent with the actual outcome. This does not mean the forecast lacked quality; a larger sample of forecast/verification data would be needed to make that judgment.
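To illustrate why a single trial says so little, the sketch below compares two hypothetical forecasters who both state "30% chance of rain," only one of whom is calibrated. The miscalibrated rate of 0.60 and the sample size are made-up values for the example; the point is that one outcome cannot separate the two, while a large verification sample can:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

stated = 0.30  # both forecasters say "30% chance of rain"
true_probs = {"calibrated": 0.30, "miscalibrated": 0.60}  # assumed true rates

for name, p in true_probs.items():
    outcomes = rng.random(10_000) < p  # rain (True) or no rain (False)
    single = outcomes[0]               # a single trial: just one 0/1 outcome
    # One 0/1 outcome is consistent with almost any stated probability;
    # only the long-run observed frequency exposes the miscalibrated forecast.
    print(f"{name}: single trial = {'rain' if single else 'no rain'}, "
          f"frequency over 10,000 trials = {outcomes.mean():.3f} "
          f"(stated {stated:.2f})")
```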
For the lay consumer, that 30% forecast is best interpreted as the relative frequency with which rain would occur if this forecast scenario were hypothetically repeated many times - much the same way we think of coin tosses and the probability of obtaining "heads" or "tails." In the short run, you can get clumps of heads or tails whose observed frequency departs from the long-run expected value. In the long run, however, the relative frequency interpretation becomes quite intuitive and, indeed, useful.
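That coin-toss intuition is easy to reproduce in code. The short sketch below (a fair coin, simulated in Python) shows the running frequency of heads bouncing around in the short run and settling toward 0.5 as the number of tosses grows:

```python
import numpy as np

rng = np.random.default_rng(seed=7)

tosses = rng.random(100_000) < 0.5  # True = heads, for a fair coin

# Running relative frequency of heads after each toss.
running_freq = np.cumsum(tosses) / np.arange(1, tosses.size + 1)

# Short run: clumpy and possibly far from 0.5; long run: converges to 0.5.
for n in (10, 100, 1_000, 100_000):
    print(f"after {n:>6} tosses: frequency of heads = {running_freq[n - 1]:.3f}")
```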