This work presents an alternative metric for evaluating the quality of solar forecasting models. Conventional approaches typically use quantities such as the root-mean-square error (RMSE) or correlation coefficients to evaluate model quality. The direct use of such statistical quantities to assess forecasting quality can be misleading because these metrics do not account for the variability of the underlying solar irradiance time series. In contrast, the quality metric proposed here, defined as the ratio of solar uncertainty to solar variability, compares the forecasting error directly with the solar variability. By comparing forecasting error with solar variability over different time windows, we show that this ratio is essentially a statistical invariant for each forecast model employed, i.e., the ratio is preserved over widely different time horizons when the same time-averaging periods are used, and therefore provides a robust way to compare solar forecasting skill. We employ the proposed metric to evaluate two new forecasting models introduced here and compare their performance with that of a persistence model.
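As a minimal sketch of how such an uncertainty-to-variability ratio might be computed, the snippet below assumes one common formulation from the solar-forecasting literature: uncertainty U taken as the RMSE of the forecast, and variability V as the root-mean-square of step-to-step changes in the measured series (i.e., the error of a persistence forecast). The exact definitions in this work may differ; function and variable names here are illustrative only.

```python
import numpy as np

def uncertainty_to_variability(measured, forecast):
    """Ratio U/V of forecast uncertainty to solar variability.

    Assumed (hypothetical) definitions:
      U = RMSE of the forecast over the evaluation window
      V = RMS of step changes in the measured series
    """
    measured = np.asarray(measured, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    # Uncertainty: root-mean-square forecast error over the window
    u = np.sqrt(np.mean((forecast - measured) ** 2))
    # Variability: root-mean-square of step-to-step changes; this equals the
    # RMSE of a persistence forecast, so U/V < 1 means the model beats persistence
    v = np.sqrt(np.mean(np.diff(measured) ** 2))
    return u / v

# Usage: for a persistence forecast (forecast[t] = measured[t-1]),
# the ratio should be close to 1 by construction.
rng = np.random.default_rng(0)
ghi = 500.0 + np.cumsum(rng.normal(0.0, 10.0, 200))  # synthetic irradiance series
persistence = ghi[:-1]                               # one-step persistence forecast
ratio = uncertainty_to_variability(ghi[1:], persistence)
```

Under these assumed definitions, the ratio for a persistence model is approximately 1, which makes it a natural reference point when ranking forecast models.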