Reasoning about a computation based on the problem and its semantics
While it is important that our code be correct, it is equally important to review our work from an efficiency standpoint. The number of steps it takes a function to compute its final value depends on the order of evaluation. The compiler can do a lot of optimization behind the scenes, and I don't know the details of that process. What we can do as programmers is see whether we can improve our function definitions using our knowledge of the language's semantics and of the problem itself. While the example below is trivial, it is good to develop the mindset of thinking about how our definitions get evaluated.
Check for a leap year
A leap year is one that is evenly divisible by 4, except for years that are evenly divisible by 100, unless the year is also evenly divisible by 400. This gives the following definition:
```haskell
isLeapYear :: Int -> Bool
isLeapYear year =
  year `rem` 400 == 0 ||
  (year `rem` 100 /= 0 && year `rem` 4 == 0)
```
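A quick sanity check of this definition on a few sample years (the definition is reproduced here so the snippet is self-contained):

```haskell
-- Leap year: divisible by 400, or divisible by 4 but not by 100.
isLeapYear :: Int -> Bool
isLeapYear year =
  year `rem` 400 == 0 ||
  (year `rem` 100 /= 0 && year `rem` 4 == 0)

main :: IO ()
main = print (map isLeapYear [1900, 2000, 2023, 2024])
-- prints [False,True,False,True]
```

1900 is a century year not divisible by 400, so it is not a leap year, while 2000 is.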
First we note that the main expression is an `||` expression of the form `e1 || e2`. If `e1` evaluates to `False` (the common case, since few years are divisible by 400), `e2` has to be evaluated to get the final value of the entire expression. We can rewrite the expression in terms of `&&` as follows.
Knowing the semantics of `e1 && e2`: if `e1` evaluates to `False` (the common case, since three out of four years are not divisible by 4), `e2` will not be evaluated, and the entire expression evaluates to `False`. Also, within the `||` expression that now forms `e2`, the first sub-expression (not divisible by 100) is more likely to be `True`, in which case the `||` expression evaluates to `True` without evaluating the second sub-expression (divisible by 400).
```haskell
isLeapYear2 :: Int -> Bool
isLeapYear2 year =
  year `rem` 4 == 0 &&
  (year `rem` 100 /= 0 || year `rem` 400 == 0)
```
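A small sketch to make the short-circuiting behavior visible, and to check that the two definitions agree (the range check and its lambda are my own addition, not from the text):

```haskell
isLeapYear :: Int -> Bool
isLeapYear year =
  year `rem` 400 == 0 ||
  (year `rem` 100 /= 0 && year `rem` 4 == 0)

isLeapYear2 :: Int -> Bool
isLeapYear2 year =
  year `rem` 4 == 0 &&
  (year `rem` 100 /= 0 || year `rem` 400 == 0)

main :: IO ()
main = do
  -- (&&) never forces its right operand when the left is False,
  -- so this prints False instead of crashing on undefined.
  print (False && undefined)
  -- Both definitions compute the same answer for every year in a range.
  print (all (\y -> isLeapYear y == isLeapYear2 y) [1 .. 4000])
-- prints False, then True
```

The rewrite changes only how often each sub-expression is evaluated, not the result.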