Discussion:
speaking of optimisation ...
trijezdci
2015-10-08 07:17:25 UTC
I'd like to see the "what-happened" thread retired because the direction it has taken has nothing to do with its headline. One of the topics that seems to be of interest was optimisation, so ...

Apart from trivial optimisations to avoid unnecessary register loads, which many modern compilers perform, there is one important aspect of optimisation that would have a very significant impact, yet where most languages and compilers have failed to make any headway whatsoever:

How do you counter the widespread culture of premature optimisation amongst practitioners?

Many people may not even recognise premature optimisation as a problem. We believe it is a huge problem that contributes very significantly to the dismal status of software. Premature optimisation directly runs counter to correctness, reliability and safety of software. But even celebrity hackers such as RMS have a hard time convincing the lemmings that premature optimisation is bad.

The underlying root causes are attitude and lack of education, amplified by absence of reassurance.

Attitude: Too many practitioners believe that performance should always have unquestioned priority over correctness, reliability and safety.

Lack of education: Too many practitioners believe that constructs they should use are bad for performance even when this is not actually the case. They also do not seem to realise that premature optimisation often leads to worse performance.

Absence of reassurance: When languages and compilers do not provide the right kind of performance guarantees, practitioners with the above-mentioned attitude and lack of education are most likely to engage in premature optimisation.

It seems to me that the latter represents an opportunity for improvement.

I want to give two examples for performance guarantees by languages and compilers:

(1) tail call optimisation in functional languages

Practitioners of functional languages have no second thoughts about using recursion. They trust their tools to eliminate tail calls, and even to generate iterative output for recursive input when it makes sense. They are far less likely to engage in premature optimisation.

Most functional languages provide a performance guarantee by making tail call optimisation mandatory.
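
To illustrate what the guarantee covers, here is the classic accumulator transformation, sketched in Modula-2 syntax since M2R10 adopts the same guarantee (see below):

(* NOT a tail call: the multiplication happens after the recursive
   call returns, so each step needs its own stack frame. *)
PROCEDURE Factorial ( n : CARDINAL ) : CARDINAL;
BEGIN
  IF n <= 1 THEN
    RETURN 1
  ELSE
    RETURN n * Factorial(n - 1)
  END (* IF *)
END Factorial;

(* Tail call: the recursive call is the very last action, so a compiler
   that guarantees tail call elimination compiles this into a loop
   running in constant stack space. *)
PROCEDURE FactAux ( n, acc : CARDINAL ) : CARDINAL;
BEGIN
  IF n <= 1 THEN
    RETURN acc
  ELSE
    RETURN FactAux(n - 1, n * acc)
  END (* IF *)
END FactAux;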

(2) inline functions versus macros in GNU C

The documentation of the GNU C compiler has an entire chapter telling practitioners that function macros are evil and should be avoided, and that GNU C's inline functions are more readable and just as efficient; yet the culture of using macros persists.

Part of that may have to do with the wish for portability. The C standard does not make function inlining mandatory; the inline keyword is only a suggestion to the compiler, not a mandate. Thus when using inline functions in place of function macros, the performance guarantee only applies to GNU C, and the code would need to be edited when using another compiler, because forcing inlining requires a GNU-specific attribute: __attribute__((always_inline)).

If C as a language gave an equivalent performance guarantee, combined with education, this would probably make a significant difference. A performance guarantee by the GNU C compiler alone, even with RMS speaking out against function macros, does not seem to be sufficient to change the culture.

Of course there is a good reason why the inline attribute is only a suggestion and not a mandate: except in very trivial cases, the compiler knows better than the programmer when inlining makes sense and when it does not. If there was a mandate to inline across the board, it is very likely that a culture of "always inline everything" would develop.

IOW, lack of confidence in the compiler is a factor here: "I'd better use a macro, just in case the compiler won't inline it." If the experience with functional languages and TCO is any indication, this confidence can be developed.

Notably, the performance guarantee in functional languages to eliminate tail calls is a smart guarantee. It doesn't say the compiler will eliminate all recursion, only recursion in one very specific situation: in tail call position.

IOW, performance guarantees will have to be a little smarter than just "on" versus "off", or "maybe" versus "always".

Simple scenarios could be given where inlining can be guaranteed.

One such scenario is a function that returns the value of a hidden variable.

Modula-2 practitioners do not need to be told that hiding a variable by placing its declaration into an implementation part and providing a function in the definition part to return its value is proper and safe design. I trust that most Modulans will actually stick to doing so in practice as well.

However, it is mind boggling how many computer science graduates out there do not understand the concept of data encapsulation, aka information hiding. They know the terminology and the concept, but they don't apply it consistently. There are just too many cases where people grant themselves exceptions, deciding that this particular time they need not encapsulate, or may bypass encapsulation.

Data encapsulation is like being pregnant though. There is no such thing as "a little pregnant". One either uses encapsulation or one doesn't. I have asked a great many folks why they bypass encapsulation in particular code examples. The reasons given were almost always concern for performance, in some cases laziness.

To tackle laziness is very easy. Wirth had already done that in PIM, where he recommended that variable declarations should either not be permitted within definition parts at all, or, if they are permitted, that the variables should only be exported read-only.

Unfortunately though, he didn't implement his own recommendation in his own compiler and to my knowledge no other compiler ever followed that recommendation either. In M2R10 we have made this recommendation mandatory.

To tackle concern for performance is also fairly straightforward. An inline guarantee for variable accessor functions could be made in the language. Thus a function of the form ...

PROCEDURE getFoo ( ) : Foo <*INLINE*>;
BEGIN
  RETURN foo (* hidden variable *)
END getFoo;

... should always be inlined; the language specification should make this mandatory.

The same guarantee could be given for mutator procedures. A procedure of the form ...

PROCEDURE setFoo ( value : Foo ) <*INLINE*>;
BEGIN
  foo := value
END setFoo;

... should always be inlined; the language specification should make this mandatory, too.

There are some other common simple scenarios where inlining should be guaranteed. Outside of such guaranteed scenarios, inlining must remain a suggestion to the compiler, but for the select scenarios a suggestion is not good enough to change the habits of practitioners.

Smart performance guarantees of this kind can go a long way towards improving both performance and reliability. Practitioners who stick to proper engineering practice would no longer be penalised in circumstances where doing the right thing would otherwise lead to lesser performance, and those who would sacrifice all else to performance would no longer have reason nor motivation to engage in premature optimisation.

In M2R10 we made tail call optimisation mandatory, we made Wirth's recommendation of exporting global variables always read-only mandatory, and we made inlining for getter functions and setter procedures mandatory when requested.

I am sure there are other simple optimisations of this kind that, if guaranteed by the language specification, will give practitioners the confidence to do the right thing and abstain from premature optimisation.
Martin Brown
2015-10-08 08:52:03 UTC
Post by trijezdci
One of the topics that seems to be of interest was optimisation, so ...
Apart from trivial optimisations to avoid unnecessary register loads,
which is something many modern compilers do, there is one important aspect
in connection with optimisation that would have a very significant impact
but most languages and compilers have failed to make any headway
How do you counter the widespread culture of premature optimisation amongst practitioners?
Show them that your compiler will generate pretty much the same
optimised code for all loop constructs and move invariants outside loops
so that they don't have to bother. These days a really good optimising
compiler can make use of x86 profiling information to determine exactly
where the true bottlenecks actually are.

Initially I'd settle for just keeping track of register contents.

If the compiler can recognise situations where it can use SIMD
instructions or other hardware acceleration so much the better but these
are not a priority in implementing the language.

There is no point in putting effort anywhere other than the 20% of code
(often much less) where the most execution time is spent.

Most people optimise things that are supremely irrelevant :(

It is an interesting exercise on a given platform to compile a simple
vector dot product returning one scalar and see what each of the
available constructs results in. It is worth running on a range of x86
CPUs too. It is curious how the different assembler tricks run at quite
different speeds on various steppings of Pentium, i5 and i7. The optimum
choice of routine is target CPU dependent.
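
For concreteness, the benchmark routine amounts to no more than this; a
sketch in classic Modula-2, assuming open-array REAL vectors of equal
length:

PROCEDURE DotProduct ( VAR a, b : ARRAY OF REAL ) : REAL;
(* VAR parameters avoid copying the arrays; HIGH yields the upper bound *)
VAR
  i : CARDINAL;
  sum : REAL;
BEGIN
  sum := 0.0;
  FOR i := 0 TO HIGH(a) DO
    sum := sum + a[i] * b[i]
  END; (* FOR *)
  RETURN sum
END DotProduct;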

Challenge the wannabes to beat the compiler by using assembler and, if it
is a half decent optimiser, they will quickly get the message. It is hard
for an inexperienced human to avoid accidental pipeline stalls, whereas
the optimising compiler can be aware of all the details.
Post by trijezdci
Many people may not even recognise premature optimisation as a problem.
We believe it is a huge problem that contributes very significantly to
the dismal status of software. Premature optimisation directly runs
counter to correctness, reliability and safety of software. But even
celebrity hackers such as RMS have a hard time convincing the lemmings
that premature optimisation is bad.
The underlying root causes are attitude and lack of education, amplified
by absence of reassurance.
Attitude: Too many practitioners believe that performance should always
have unquestioned priority over correctness, reliability and safety.
C has encouraged this sort of micro management (and in the past it was
sometimes necessary before optimisers became smart).

I have been known to use inline assembler in Modula-2, or (ugh!)
CODE(hex values) in Logitech Modula-2. Please don't ever allow CODE again!
Post by trijezdci
Lack of education: Too many practitioners believe that constructs they
should use are bad for performance even when this is not actually the
case. They also do not seem to realise that premature optimisation often
leads to worse performance.
It also makes the code harder to get right.
Post by trijezdci
Absence of reassurance: When languages and compilers do not provide the
right kind of performance guarantees, practitioners with the
above-mentioned attitude and lack of education are most likely to engage
in premature optimisation.
It seems to me that the latter represents an opportunity for improvement.
Simple scenarios could be given where inlining can be guaranteed.
One such scenario is a function that returns the value of a hidden variable.
Modula-2 practitioners do not need to be told that hiding a variable by
placing its declaration into an implementation part and providing a
function in the definition part to return its value is proper and safe
design. I trust that most Modulans will actually stick to doing so in
practice as well.

Don't bet on it. I have seen plenty of production code that didn't.
Post by trijezdci
However, it is mind boggling how many computer science graduates out
there do not understand the concept of data encapsulation, aka
information hiding. They know the terminology and the concept, but they
don't apply it consistently. There are just too many cases where people
grant themselves exceptions, deciding that this particular time they
need not encapsulate, or may bypass encapsulation.

One way out would be to alter the behaviour of variable declarations in
DEF modules so that they can never be assigned to externally without
using a private procedure such as setFoo that is part of the module.
(I should read ahead before typing)

IOW, to the outside world the module variables in a DEF are effectively
CONST foo rather than VAR foo and cannot be assigned to.
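
Today that discipline has to be kept by hand, along these lines; a
sketch in classic Modula-2, with module and identifier names invented
for illustration:

DEFINITION MODULE Counter;
  PROCEDURE value ( ) : CARDINAL; (* getter for the hidden counter *)
  PROCEDURE inc;                  (* the only way to modify it      *)
END Counter.

IMPLEMENTATION MODULE Counter;
  VAR count : CARDINAL; (* hidden: importers cannot touch it directly *)

  PROCEDURE value ( ) : CARDINAL;
  BEGIN
    RETURN count
  END value;

  PROCEDURE inc;
  BEGIN
    count := count + 1
  END inc;

BEGIN (* module initialisation *)
  count := 0
END Counter.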


I also want this behaviour for parameters passed to PROCEDUREs with
CONST, meaning the argument can be passed by reference but never altered.
I think from the Wiki and examples you have also done this.
Post by trijezdci
Data encapsulation is like being pregnant though. There is no such thing
as "a little pregnant". One either uses encapsulation or one doesn't.
I have asked a great many folks why they bypass encapsulation in
particular code examples. The reasons given were almost always concern
for performance, in some cases laziness.
Grin. I have done it for both these reasons. If you are in a hurry and
need it now, with the intention of only ever using it once... Then you
find it is useful and never get a ROUNDTUIT, so it grows like Topsy.
Post by trijezdci
To tackle laziness is very easy. Wirth had already done that in PIM,
where he recommended that variable declarations should either not be
permitted within definition parts at all, or, if they are permitted,
that the variables should only be exported read-only.

Yes. I agree.
Post by trijezdci
Unfortunately though, he didn't implement his own recommendation in his
own compiler and to my knowledge no other compiler ever followed that
recommendation either.
In M2R10 we have made this recommendation mandatory.
Excellent!
Post by trijezdci
To tackle concern for performance is also fairly straightforward.
An inline guarantee for variable accessor functions could be made in
the language. Thus a function of the form ...
PROCEDURE getFoo ( ) : Foo <*INLINE*>;
BEGIN
  RETURN foo (* hidden variable *)
END getFoo;
... should always be inlined; the language specification should make
this mandatory.
The same guarantee could be given for mutator procedures. A procedure
of the form ...
PROCEDURE setFoo ( value : Foo ) <*INLINE*>;
BEGIN
  foo := value
END setFoo;
... should always be inlined; the language specification should make
this mandatory, too.
There are some other common simple scenarios where inlining should be
guaranteed.
Inlining should always be guaranteed if the code size would be reduced
by doing it (rare). It might be handy to be able to suggest to the
compiler the % size gain you are prepared to tolerate for speed.
Post by trijezdci
Outside of such guaranteed scenarios, inlining must remain a suggestion
to the compiler, but for the select scenarios a suggestion is not good
enough to change the habits of practitioners.
Smart performance guarantees of this kind can go a long way to improve
both performance and reliability.
In M2R10 we made tail call optimisation mandatory, we made Wirth's
recommendation of exporting global variables always read-only mandatory,
and we made inlining for getter functions and setter procedures
mandatory when requested.
All very good stuff. I find almost nothing to disagree with in what you
propose.

On a purely practical and impure front, I would like to have a pragma to
allow slightly dangerous shorthands, which I am fairly sure you will not
like as a suggestion, but please have a think about it.

In its pure form M2R10 will require type conversions everywhere in a
mixed expression. I think it would be advantageous in the real world to
have a pragma for the situation where, if all the rValues can be exactly
represented in the lValue's type, they are treated as if promoted to
that type before the computation starts. I know this could result in
potentially different results, but it makes mathematical expressions
involving a loop index easier to read.

Concrete example where the two approaches would differ markedly.

VAR
  I : CARDINAL;
  x : ARRAY [0..70000] OF LONGREAL;

FOR I := 0 TO 70000 DO
  x[I] := I*I;
END;

In strict M2 this will fail with an integer overflow when I = 65536,
since I*I then exceeds a 32-bit CARDINAL; with a promote-to-lValue
pragma it will probably do what the author intended.

I realise this will violate your Centigrade/Fahrenheit example since
they will both be ALIAS LONGREAL under this treatment.

I did have a browse through the language spec and have a couple of minor
comments. Does WRITEF really need at least one extra parameter?

WRITEF("Hello World");

looks better and cleaner to me than

WRITEF("Hello %s", "World");

Is there an equivalent FORMAT primitive somewhere that I missed?
(ie same syntax as WRITEF but result returned in a string)

Probably teaching granny to suck eggs here, but have you seen the now
rather old papers by the XDS compiler developers on using dataflow
analysis of the compile tree to catch, at compile time, latent faults
caused by expressions that include uninitialised variables?
(ie there is at least one path on which they are not assigned a value)

It catches a surprising number of latent faults in apparently good
production code. The faults typically lurking in obscure error recovery
code that was never really properly tested during development.

I'd also argue in favour of a compiler scoring procedures within a
module with some metrics like length and CCI which are a good proxy
guide to how likely the code is to be bug free.

Another tool which Modula lends itself to is an overall analyser for who
imports what from whom, so that you can see clearly where things are not
perhaps grouped as cohesively as they might have been.
Post by trijezdci
I am sure there are other simple optimisations of this kind that, if
guaranteed by the language specification, will give practitioners the
confidence to do the right thing and abstain from premature optimisation.
Don't hold your breath. I fear it is too deeply ingrained in the
culture. People are still teaching CS who grew up with compilers that
would generate truly awful slow code from otherwise nice languages.
--
Regards,
Martin Brown
trijezdci
2015-10-08 12:15:46 UTC
Post by Martin Brown
There is no point in putting effort anywhere other than the 20% of code
(often much less) where the most execution time is spent.
Agreed.
Post by Martin Brown
Most people optimise things that are supremely irrelevant :(
Haha, my observation, too.
Post by Martin Brown
Post by trijezdci
Attitude: Too many practitioners believe that performance should always have unquestioned
priority over correctness, reliability and safety.
C has encouraged this sort of micro management (and in the past it was
sometimes necessary before optimisers became smart).
The trouble now is that the mindset persists.
Post by Martin Brown
I have been known to use inline assembler in Modula2 or the Ugh!
CODE(hex values) in Logitech. Please don't ever allow CODE again!
Any assembly support would always be an implementation-defined add-on, if only because you cannot specify every possible assembly language in your spec. Whoever has a reason to provide such an add-on is not going to be put off just because you say it is forbidden. It is better, then, to specify a general framework as an option, along the lines of "if you must do it, follow these guidelines". This is what we have done. Any assembly support has to go into a pseudo-module ASSEMBLER, which is optional.
Post by Martin Brown
Post by trijezdci
Lack of education: Too many practitioners believe that constructs they should use are
bad for performance even when this is not actually the case. They also do not seem to
realise that premature optimisation often leads to worse performance.
It also makes the code harder to get right.
Indeed.

I may add, harder to read and thus harder to maintain as well.
Post by Martin Brown
Post by trijezdci
Modula-2 practitioners do not need to be told that using hidden variables by placing their
declarations into an implementation part and providing a function in the definition part to
return its value is proper and safe design.
I trust that most Modulans will actually stick to doing so in practise as well.
Don't bet on it. I have seen plenty of production code that didn't.
I realise that, but compared with C practitioners and other folk, I believe, or at least hope, that the compliance ratio is significantly more favourable amongst Modulans.
Post by Martin Brown
One way out would be to alter the behaviour of variable declarations in
DEF modules so that they can never be assigned to externally without
using a private function setFoo is a part of the module.
(I should read ahead before typing)
:-)
Post by Martin Brown
I also want this behaviour for parameters passed to PROCEDUREs with
CONST meaning it can be passed by reference but never altered.
I think from the Wiki and examples you have also done this.
Yes, we have three formal parameter attributes: CONST, VAR and NEW.

A formal CONST parameter may not be an L-value and it may not be passed to any VAR or NEW parameters. If the parameter is a pointer, then the CONST attribute de facto means CONST POINTER TO CONST. Neither the pointer itself nor its target may be an L-value and neither may be passed to any VAR or NEW parameters.

As a curiosity, a formal NEW parameter is a VAR parameter with an additional safety check: the value passed in must be NIL, or a compile time or runtime error will occur. Instances of immutable types that wouldn't be passable to VAR parameters may be passed to NEW parameters, which is useful for initialisation, because once the pointer is no longer NIL it cannot be passed to a NEW parameter again.
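
For illustration, a sketch of how this might look in source code. The
attribute placement and all names here are invented for illustration;
the concrete M2R10 syntax may differ:

(* assume File is a pointer type *)
PROCEDURE Open ( NEW f : File; CONST name : ARRAY OF CHAR );

VAR f : File; (* starts out NIL *)

Open(f, "foo.txt");  (* OK: f is NIL on entry, non-NIL on return *)
Open(f, "bar.txt");  (* error: f is no longer NIL, cannot go to NEW *)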
Post by Martin Brown
Grin. I have done it for both these reasons. If you are in a hurry and
need it now with the intention of only ever using it once... Then you
find it is useful and never get a ROUNDTUIT so it grows like topsy.
Well, I don't want to encourage this kind of thing, but we do have a pragma that can remind you to fix the trouble you are causing yourself: the DEPRECATED pragma.

VAR foo : Foo <*DEPRECATED*>;

This will make the compiler nag you every time you are using it.
Post by Martin Brown
Post by trijezdci
There are some other common simple scenarios where inlining should be guaranteed.
Inlining should always be guaranteed if the code size would be reduced
by doing it (rare). It might be handy to be able to suggest to the
compiler the % size gain you are prepared to tolerate for speed.
The problem here is mostly perception by practitioners, not what compilers are capable of doing.

Most compilers are pretty good at inlining. There is really no good reason not to trust the compiler's judgement on whether something should be inlined.

But if the documentation doesn't say that a certain scenario will be inlined as a matter of guarantee, then there will be plenty of idiots who try to outsmart the compiler and do whatever nonsense they shouldn't be doing, all in the belief that they are optimising their code.

For a language like Modula-2, which doesn't support user-definable macros, this can be a problem: prospective adopters will ask "do I get user defined macros?" and "does it give inline guarantees?", and if the answer is "No" in both cases, they may never even give it a try.

Thus, my point is first and foremost that the guarantee we provide in M2R10 is there to instil user confidence: "Yes, you can encapsulate your data and provide access only via getters and setters. We promise it will never hurt performance."
Post by Martin Brown
One a purely practical and impure front I would like to have a pragma to
allow slightly dangerous shorthands which I am fairly sure you will not
like as a suggestion but please have a think about it.
In the pure form M2R10 will require type conversions everywhere in a
mixed expression. I think it would be advantageous in the real world to
have a pragma to allow the situation where if all the rValues can be
exactly represented in the lValue they are treated as if promoted to the
lValue before the computation starts. I know this could result in
potentially different results but it makes mathematical expressions
involving a loop index easier to read.
Concrete example where the two approaches would differ markedly.
VAR
  I : CARDINAL;
  x : ARRAY [0..70000] OF LONGREAL;
FOR I := 0 TO 70000 DO
  x[I] := I*I;
END;
We won't sacrifice safety for the sake of lazy typers, but we have made quite a bit of effort to make the FOR loop more readable in other ways. Our FOR loop is a FOR IN loop, and your example would be ...

FOR index, value IN array DO
  value := (index * index) :: LONGREAL
END;

or if you must use single letter loop variables ...

FOR i, v IN x DO
  v := (i * i) :: LONGREAL
END;

If you feel that it is too much of an inconvenience to have to type LONGREAL, then I would recommend defining an alias type for it ...

TYPE LREAL = LONGREAL;

your loop would then become ...

FOR i, v IN x DO
  v := (i * i) :: LREAL
END;

which is one character longer than yours (taking the missing DO into account). Of course you could define an alias type LR for LONGREAL, in which case it would be just as short, although I would rather type three characters more and have a self-explanatory name for the type.

Anyway, the philosophy of M2R10 is that the source code should show the author's INTENT; it should not leave you guessing what the intent was. This means the design principle is always EXPLICIT, NOT IMPLICIT. This results in better readability, better verifiability and better maintainability. In many cases, the explicit syntax can also serve to supply annotations to an optimising back end, which can do more optimisation work when intent is visible.


Post by Martin Brown
I did have a browse through the language spec and have a couple of minor
comments. Does WRITEF really need at least one extra parameter?
WRITEF("Hello World");
looks better and cleaner to me than
WRITEF("Hello %s", "World");
WRITEF is a Wirthian macro. That is to say, it looks like a built-in procedure or function, but it is a hardwired macro that the compiler replaces with a library call. We call them Wirthian macros because, according to our research, Wirth was the first to do this, with NEW and DISPOSE in classic Modula-2: NEW is replaced by a call to Storage.ALLOCATE and DISPOSE by a call to Storage.DEALLOCATE. Another characteristic of Wirthian macros is that the compiler may fill in parameters that are not passed to the macro but must be passed to the replacement call.

The pseudo-signature of WRITEF is

PROCEDURE WRITEF ( f : File; fmt : ARRAY OF CHAR; valueList : ARGLIST >0 OF <Type> );

This means that all the passed in values for valueList must be of the same type. No mixing permitted.

When WRITEF is invoked, the compiler looks at the type of the arguments passed for the last formal parameter. This determines which library call is substituted for the macro. For example, in ...

WRITEF(f, "", 1.23, 4.56, 7.89);

the values are real numbers. The macro invocation is then replaced with a call to the respective WRITEF implementation for type REAL, which is in standard library module REALIO.def

PROCEDURE [WRITEF] writeF ( f : File; fmt : ARRAY OF CHAR; values : ARGLIST >0 OF REAL );

the replacement call will be

REALIO.writeF(f, "", 1.23, 4.56, 7.89);

As for the format string, it is important to note that it is for formatting only, unlike C's printf, where format strings also serve as formal parameter specifiers determining what parameters of what types are to be passed. That is of course unsafe and error prone; especially when the format string is a runtime string, all bets are off.

Our format string is strictly for formatting. It controls whether a number should be left or right justified, in which radix it should be rendered, how many significant digits should be shown, etc. It does not support any pseudo-formatting parameters that indicate type information. The types of actual parameters must strictly follow the formal parameters of the procedure's definition.
Post by Martin Brown
Is there an equivalent FORMAT primitive somewhere that I missed?
(ie same syntax as WRITEF but result returned in a string)
This will be in the respective library. There is no Wirthian macro for it.
Post by Martin Brown
Probably teaching granny to suck eggs here but have you seen the now
rather old papers by the XDS compiler developers on using dataflow
analysis of the compiler tree to catch any latent faults caused by
expressions that include uninitialised variables at compile time?
(ie there is at least one path where they are not assigned a value)
I haven't seen the papers you mention, but I have several books on the subject, not least the one I already mentioned in the discussion: Michael L. Scott's PLP, which deals with this in some detail and with practical examples. This uses SSA as the basis for the analysis.

We have several language defined pragmas for this kind of thing.

For example pragma OUT, which represents a promise to write to a VAR parameter, so that the initialisation of variables can be tracked.

Such pragmas only generate warnings, but these are what we call promotable warnings, which means the compiler must provide a means for users to treat them as errors. Thus, the question of warning or error becomes a policy decision by the developer of the code.
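
For example (a sketch only; the pragma placement here is illustrative,
borrowed from the <*INLINE*> convention shown earlier in this thread):

(* <*OUT*> promises that ReadReal always assigns to its VAR parameter *)
PROCEDURE ReadReal ( VAR value : REAL <*OUT*> );

VAR x, y : REAL;

ReadReal(x);    (* the promise lets the compiler mark x as initialised *)
y := x + 1.0    (* so no uninitialised-variable warning arises here *)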
Post by Martin Brown
It catches a surprising number of latent faults in apparently good
production code. The faults typically lurking in obscure error recovery
code that was never really properly tested during development.
Indeed, reporting uninitialised variables is important for correctness.
Post by Martin Brown
I'd also argue in favour of a compiler scoring procedures within a
module with some metrics like length and CCI which are a good proxy
guide to how likely the code is to be bug free.
Another tool which Modula lends itself to is an overall analyser for who
imports what from who so that you can see clearly where things are not
perhaps grouped as cohesively as they might have been.
All good ideas, but let's not put the cart before the horse. These features would be by choice of an implementor, they are not the kind of thing you would mandate in a language specification. And we need to get a working compiler first before we go into the bells and whistles of compiler switches.
Post by Martin Brown
Post by trijezdci
I am sure there are other simple optimisations of this kind that if guaranteed by the language
specification will give practitioners the confidence to do the right thing and
abstain from premature optimisation.
Don't hold your breath. I fear it is too deeply ingrained in the
culture. People are still teaching CS who grew up with compilers that
would generate truly awful slow code from otherwise nice languages.
Confidence isn't built in one day. It will take some time, but you have to work for it.
Gour
2015-10-08 13:02:23 UTC
On Thu, 8 Oct 2015 05:15:46 -0700 (PDT)
Post by trijezdci
All good ideas, but let's not put the cart before the horse. These
features would be by choice of an implementor, they are not the kind
of thing you would mandate in a language specification. And we need
to get a working compiler first before we go into the bells and
whistles of compiler switches.
Thanks a lot for the very interesting mails you write... however, I'd
like to see whether you can write something about the possibilities for
people to contribute to the R10 project at this moment, IOW something
in line with the Goals defined on the web site?

Let's hope there will be plenty of time to talk about optimizing and/or
tailoring the compiler, but as you confirmed, having a bootstrap compiler
is a very important milestone to revive interest in (Modula-2) R10.


Sincerely,
Gour
--
A person is considered still further advanced when he regards honest
well-wishers, affectionate benefactors, the neutral, mediators, the
envious, friends and enemies, the pious and the sinners all with an
equal mind.
trijezdci
2015-10-08 12:20:32 UTC
Of course I meant

TYPE LREAL = ALIAS OF LONGREAL;
trijezdci
2015-10-08 12:51:40 UTC
Post by trijezdci
Of course I meant
TYPE LREAL = ALIAS OF LONGREAL;
Also, to avoid an overflow, the integers should be converted to LONGREAL first before multiplying ...

i :: LONGREAL * i :: LONGREAL

if the length of this is of concern, you can always write a function

PROCEDURE square ( i : INTEGER ) : LONGREAL <*INLINE*>;
BEGIN
  RETURN i :: LONGREAL * i :: LONGREAL
END square;

Notice how the inline guarantee for a single-statement function will make a difference here!

your loop now becomes ...

FOR i, v IN x DO
  v := square(i)
END;

... and there is no performance penalty for calling a function.

And we're back on topic :-)
Pascal J. Bourguignon
2015-10-08 13:43:33 UTC
Post by trijezdci
How do you counter the widespread culture of premature optimisation amongst practitioners?
It makes us lose a lot of debugging time.
Post by trijezdci
Many people may not even recognise premature optimisation as a
problem. We believe it is a huge problem that contributes very
significantly to the dismal status of software. Premature optimisation
directly runs counter to correctness, reliability and safety of
software. But even celebrity hackers such as RMS are having a hard
time to convince the lemmings that premature optimisation is bad.
The underlying root causes are attitude and lack of education,
amplified by absence of reassurance.
[…]
Part of that may have to do with the wish for portability. The C
standard does not make function inlining mandatory, it is only a
suggestion to the compiler, not a mandate. […]
Actually, when looking closely at the C standard and, for example, the
Common Lisp standard, you realize that both language definitions
basically allow for the same kinds of compilation-time and run-time
checks, and for the generation of safe or unsafe code.

For example, CLHS defines aref with:

aref array &rest subscripts => element
array---an array.
subscripts---a list of valid array indices for the array.
Exceptional Situations: None.

Notice that there are no 'systematic' exceptional situations, and
therefore the only question is that of the validity of the array indices
given.

Elsewhere, the standard defines safe code and unsafe code (depending
basically on whether all the components of the executed code were
compiled with the highest safety optimization level or not), and error
detection, such as on invalid parameters, is specified to occur only in
safe code (it can still occur in unsafe code, only then it is
implementation dependent). The error detection can also occur either at
compilation time or at run-time, depending on the abilities of the
implementation and the properties of the program.

But the important thing here is that 'most' CL implementations will still
detect out-of-bounds errors even in unsafe code!

With safe code, as specified by the standard:

[***@kuiper :0.0 tmp]$ clall -r '(declaim (optimize (safety 3) (speed 0) (space 0) (debug 0)))' '(aref #(1 2 3) 4)'

Clozure Common Lisp --> NIL
Clozure Common Lisp Array index 4 out of bounds for #(1 2 3) .
CLISP --> NIL
CLISP AREF: index 4 for #(1 2 3) is out of range
CMU Common Lisp --> NIL
CMU Common Lisp Error in function LISP::%ARRAY-ROW-MAJOR-INDEX: Invalid index 4 in #(1 2 3)
ECL --> NIL
ECL In function AREF, the index into the object #(1 2 3). takes a value 4 out of the range (INTEGER 0 2).
SBCL --> No value.
SBCL Index 4 out of bounds for (SIMPLE-VECTOR 3), should be nonnegative and <3.



With unsafe code, as expected by the culture:

[***@kuiper :0.0 tmp]$ clall -r '(declaim (optimize (safety 0) (speed 3) (space 0) (debug 0)))' '(aref #(1 2 3) 4)'

Clozure Common Lisp --> NIL
Clozure Common Lisp Array index 4 out of bounds for #(1 2 3) .
CLISP --> NIL
CLISP AREF: index 4 for #(1 2 3) is out of range
CMU Common Lisp --> NIL
CMU Common Lisp Error in function LISP::%ARRAY-ROW-MAJOR-INDEX: Invalid index 4 in #(1 2 3)
ECL --> NIL
ECL In function AREF, the index into the object #(1 2 3). takes a value 4 out of the range (INTEGER 0 2).
SBCL --> No value.
SBCL Index 4 out of bounds for (SIMPLE-VECTOR 3), should be nonnegative and <3.



In the case of C, the C standard allows for the same behavior. But the
culture of C users and C compiler writers doesn't allow them to leave
run-time bounds checking code in.

On the other hand, there are C implementations that will keep those
checks, such as C interpreters, or C compilers targeting controlled
environments (cf. Zeta-C, Vacietis).
Post by trijezdci
If C as a language would give an equivalent performance guarantee
combined with education this would probably make a significant
difference. A performance guarantee by the GNU C compiler alone, even
with RMS speaking out against function macros, does not seem to be
sufficient to change the culture.
With pipelined and heavily memory-cached processors like those we have
nowadays, all the bounds checks would come essentially for free, since
they'd basically run in parallel in the pipeline without a memory hit,
and be on the non-branching side of the branch prediction.
Post by trijezdci
However, it is mind boggling how many computer science graduates out
there do not understand the concept of data encapsulation aka
information hiding.
This was the subject of several lessons and projects when I was a
student (in France). Did that change, or is it not the subject in other
countries? And did most (professional) programmers go through such a
formal education?
Post by trijezdci
Data encapsulation is like being pregnant though. There is no such
thing as "a little pregnant". One either uses encapsulation or one
doesn't. I have asked a great many folks why they bypass encapsulation
in particular code examples. The reasons given were almost always
concern for performance, in some cases laziness.
And the fact that C++ doesn't allow it, requiring all the members to be
put in the header along with the definition of the classes, doesn't
help.
--
__Pascal Bourguignon__ http://www.informatimago.com/
“The factory of the future will have only two employees, a man and a
dog. The man will be there to feed the dog. The dog will be there to
keep the man from touching the equipment.” -- Carl Bass CEO Autodesk