
Is *explicit* better than implicit? #10

szaghi opened this issue Jan 18, 2016 · 66 comments

@szaghi
Member

szaghi commented Jan 18, 2016

I feel that there is almost unanimous agreement that implicit none must be mandatory, but I guess many of you will provide interesting exceptions, so

do you agree that we should advise being explicit?

If you disagree, please elaborate on your idea.

@cmacmackin
Collaborator

I can think of no circumstance in which implicit none should be omitted. Put it at the start of every module, submodule, or program.
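For concreteness, a minimal sketch of the placement being advocated (the module and program names here are invented for illustration):

```fortran
module physics_mod
  implicit none   ! one statement covers the module and its contained procedures
  real :: gravity = 9.81
contains
  subroutine report()
    print *, 'gravity =', gravity
  end subroutine report
end module physics_mod

program demo
  use physics_mod
  implicit none   ! a program is a separate scoping unit: it needs its own statement
  call report()
end program demo
```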


@LadaF
Collaborator

LadaF commented Jan 18, 2016

I think implicit none can be omitted for procedures which get implicit none from their host, both internal and module procedures. At least that is my personal style, and I always omit it in this case.
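A sketch of the host-association behaviour described above (names hypothetical): the contained procedure inherits implicit none from the module, so restating it there would be redundant.

```fortran
module host_mod
  implicit none            ! covers the whole module scope
contains
  subroutine contained_sub()
    ! no implicit none needed here: it is inherited via host association
    integer :: i
    i = 1
    ! x = 2.0              ! an undeclared name would still be rejected here
    print *, i
  end subroutine contained_sub
end module host_mod
```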


@cmacmackin
Collaborator

Agreed, so long as they are within a scope in which implicit none has been specified. (Hence why I didn't list functions and subroutines as places where it should be specified.)


@szaghi
Member Author

szaghi commented Jan 18, 2016

@LadaF sure, I agree: in those cases it is unnecessary, but my question was, indeed,

do any of you have good reasons to use implicitly typed variables (as the older among us learned to do with F66/77)?

@LadaF
Collaborator

LadaF commented Jan 18, 2016

There are some tricks for poor man's templates where, instead of C macros, one can use implicit typing and the standard include statement.

Consider

#define mytype type(abc)
subroutine abc_sub
#include "sub_base-inc-1.f90"
end subroutine
#undef mytype

with "sub_base-inc-1.f90" containing

mytype :: var

versus

subroutine abc_sub
    implicit type(abc) (v)
    include "sub_base-inc-2.f90"
end subroutine

with "sub_base-inc-2.f90" containing just the usage of variable var. I am not sure if I got the implicit statement completely right, as I never use it.
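To make the second variant concrete, here is a hypothetical sketch (the file contents, variable names, and operations are invented for illustration, and it assumes type(abc) is accessible inside abc_sub, e.g. by host association from a containing module):

```fortran
! file: sub_base-inc-2.f90 (hypothetical contents; uses only names
! beginning with 'v', so the implicit rule below types all of them)
!
!     vtmp = var
!     var%i = vtmp%i + 1

! the concrete instantiation
subroutine abc_sub
  implicit type(abc) (v)          ! every undeclared name starting with v is type(abc)
  include "sub_base-inc-2.f90"    ! body uses var, vtmp, ... without declarations
end subroutine abc_sub
```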

@nncarlson
Collaborator

Very interesting trick @LadaF. Not sure I'd ever use it, but it's interesting nonetheless.

I'd go a step beyond making implicit none mandatory at the start of any module or program: I'd also require that it not appear at the start of any module or internal procedure that gets it from its host. Programmers learn by example, and the takeaway message of having it there is that it must somehow be necessary, which can mislead an unaware programmer.

@szaghi
Member Author

szaghi commented Jan 18, 2016

@LadaF thank you very much, I won my bet: I guessed that I could learn something even from implicit! I love learning something new that I have never read; your trick is much appreciated!

@nncarlson I am sorry, but I completely missed your point. Can you elaborate more on not appear at the start of...?

@nncarlson
Collaborator

@szaghi, if implicit none is present at the start of a module (and I think it should be required) then it applies to the entire module scope, and it is not necessary to also put it at the start of the contained procedures, as others have noted. One can go ahead anyway and add implicit none at the start of contained procedures, but it is redundant and unnecessary, and conveys the false idea that it is required. For those reasons I'd require that implicit none not be declared there, but only at the module/program scope. As I said, programmers learn by example.

@cmacmackin
Collaborator

I second requiring it not to be declared at the start of internal/module procedures.


@raullaasner
Collaborator

This could be part of a general rule: do not write code that has no effect. The implicit none scope is an example of where such a rule is not self-evident, for beginners at least.

@szaghi
Member Author

szaghi commented Jan 18, 2016

@nncarlson and all, sorry, this was so obvious to me that I got confused before reaching the second, crucial argument, which @raullaasner clarifies:

do not write code that has no effect

Wonderful, great zen! Thank you very much.

@nncarlson
Collaborator

@raullaasner is spot on with the general rule. Another example that comes to mind (sorry drifting off-topic) is deallocation of local allocatable variables. Don't bother with explicitly deallocating them before returning to the caller -- it happens automatically. Explicitly deallocating them conveys the false notion to the naive that it is necessary to avoid memory leaks.

@raullaasner
Collaborator

@nncarlson If you want to check the deallocation status and catch any error messages, you would need to do that manually.

@nncarlson
Collaborator

@raullaasner, that's true of course. I'm curious though who actually checks the deallocation status, and have they ever seen a deallocation failure. But don't answer! -- a topic for a different thread when the time is ripe (make a note @szaghi :-)

@Tobychev
Collaborator

The first time I used Fortran seriously, I had to introduce a calculation into a larger legacy simulation, and I made some typos. It took me close to a month to figure out why the program was compiling and then returning nonsense.

What is the cost of excessive "implicit none" that can be compared with the cost of leaving it out once in error?

@szaghi
Member Author

szaghi commented Jan 19, 2016

@Tobychev
In the past I compulsively added implicit none everywhere, just because others could take my snippets and use them in old-fashioned file libraries where, without module/program encapsulation and/or explicit interfaces, all the procedures live in one blobbed namespace without interfaces. In that scenario, I was happy with this bad approach just because it saved my time in eventual subsequent debugging of others' codes.

However, when we talk about best practices, I think the advice of @raullaasner and @nncarlson is very important: do not write code that has no effect. If we agree to suggest that implicit none must be added at the start of each module, submodule, and program, it would also be good to advise (eventual new Fortraners) that implicit none is not necessary inside procedures, because it is inherited from the host. Obviously, if procedures have no host, put implicit none inside them, but we should not promote old-fashioned file libraries, ops... this must go into another discussion, I am taking note @nncarlson :-)

Our point is to avoid promoting false statements, i.e. that implicit none is needed everywhere: as @nncarlson highlights, examples are of paramount relevance for strange people like us.

I agree with you that excessive implicit none does not have so high a cost, but it is simply not necessary (except for particular scenarios, e.g. old legacy codes with a lot of implicit definitions).

See you soon.

@Tobychev
Collaborator

@szaghi
While I agree that avoiding code that has no effect is a good goal, it is clearly much more desirable to write code that gives correct results, and efforts to be correct trump efforts to avoid dead code.

For example: mandating that a select statement always have a default clause is a sensible thing, even though it might never be reached because the code always chooses one of the earlier branches. Should we then, following the no-dead-code principle, mandate that you never use a default statement if you are really sure the code will take one of the earlier branches? I would say this is excessive and dangerous advice, considering the cost of retaining it (a bit of code that can be optimised away by the compiler) versus the cost of programmer misjudgement (unexpected behaviour in rare cases).
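The defensive default clause being described can be sketched like this (state names and the error message are hypothetical; error stop is Fortran 2008):

```fortran
program guarded_select
  implicit none
  integer, parameter :: STATE_IDLE = 1, STATE_RUNNING = 2
  integer :: state
  state = STATE_RUNNING
  select case (state)
  case (STATE_IDLE)
    print *, 'starting'
  case (STATE_RUNNING)
    print *, 'advancing'
  case default
    ! "can't happen" guard: fails loudly instead of silently if state is ever corrupted
    error stop 'unexpected value of state'
  end select
end program guarded_select
```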

It is also possible to argue that implicit none should never be included, by the strict no-dead-code principle, since the same effect can be achieved with compiler flags, presuming this option exists in most relevant compilers. I for one always use the "implicit none" compiler flag, but include the statements in program code anyway. I've read arguments that this is good practice because people could copy and build your code without your flags, but honestly it's mostly a tribute to that month I spent trying to get my stupid routine to make sense...

Anyway, in practice my objection is aimed at the prohibition of implicit none in certain places: I feel this is excessive, considering the great cost that can follow from mistakenly leaving it out and the small cost of its excessive use. I think the style guide should only speak of where it must appear, because there is simply no strong argument against including it excessively.

I also think this thread has now derailed from the issue of where implicit typing can be used, and is now a debate about the exact style of how it should be forbidden. (Sorry that I helped with that.)

@szaghi
Member Author

szaghi commented Jan 20, 2016

@Tobychev great! All interesting arguments!

it is clearly much more desirable to write code that gives correct results

I agree.

mandating that a select statement always have a default clause is a sensible thing...

Wonderful! I am taking note, thank you very much!

It is also possible to argue that implicit none should never be included by the strict no-dead-code principle since the same effect can be achieved by use of compiler flags, presuming this option exists in most relevant compilers.

I partially disagree: I do not like to rely on compiler vendors when standard syntax is available. I vote for a guideline advising use of the implicit none statement, and eventually also the compiler option if available, but not the contrary.
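As a concrete example of this belt-and-braces approach, gfortran offers such a flag (the file name below is hypothetical; other compilers spell the option differently, so check your compiler's manual):

# reject implicit typing even in files that forgot the statement (gfortran)
gfortran -fimplicit-none -c my_module.f90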

practice my objection is aimed at the prohibition of implicit none in certain places: I feel this is excessive considering the great cost that can follow from mistakenly leaving it out, and the small cost of its excessive use. I think the style guide should only speak of where it must appear, because there is simply no strong argument against including it excessively.

I agree, but I think we should talk neither about prohibition nor about costs: the suggestions of @nncarlson and others should be viewed (IMO) as advice that

if the host has implicit none, the contained procedures do not need it

or as

convey the right Fortran features with correct examples

We are concerned that if we advise inserting implicit none everywhere, without thought, we could mislead inexperienced Fortraners into the wrong conviction that it is necessary. We can agree that implicit none everywhere could be helpful to save us from subtle bugs (in particular circumstances), but we cannot agree that it is necessary.

Do not worry about driving me far from this topic; I am learning a lot from this digression, thank you very much!

Returning to the topic, what do you think about something like the guideline I wrote in my guidelines? Is it comprehensive enough to capture the above suggestions? At least the compiler-option comment is missing, I guess.

See you soon.

@zbeekman
Member

Should we then, following the no-dead-code principle, mandate that you should never use a default statement if you are really sure the code will take one of the earlier branches?

I may be in the minority here, or this opinion may be considered "extreme" by some, but I would actually argue yes to this point. Increasing the size of a codebase with unreachable code is a no-no in my book. Yes, it may be seen as defensive coding, but if it's actually impossible to enter that branch of the select statement (even with potentially incorrect user input, corrupted data files, or out-of-memory errors) then it is "dead" code, and it exists for no other reason than to give you a warm fuzzy feeling that you have been a diligent defensive programmer. It also means there is no way to test its correctness.

My philosophy is that you should strive to get 100% test coverage across unit tests, regression tests, and integration tests. If you can't devise a way to execute a branch of an if statement, or a case of a select case construct, during testing, because it is impossible to reach even with bad user input etc., then that code should be removed. There is no point in having it; it distracts from and bloats the important parts of the code. Furthermore, in my experience, assuming a constant error rate (N_errors/statement) has been a decent assumption. As a corollary, the longer you make your code, the more bugs you will introduce. Of course this is a gross approximation, but once you start adding a lot of "dead" code, there is no way to test whether it is actually correct, and it obfuscates the parts of the code that are executed by pushing them further apart on the page, etc.

This ties in with my guiding principle for software architecture and implementation:

"Don't build it until you need it."

I used to spend a lot of time, and effort thinking about ways that I might want to use a given code in the future, and starting to implement bits and pieces of that to make it "generic" and "future proof." I wasted a lot of time and introduced a lot of bugs. It is much better, IMO to start with a solid object oriented architecture, that is designed to be extensible and generic through use of abstract classes, TBPs etc. but then only build out the concrete implementations that you NEED RIGHT NOW.

It is also possible to argue that implicit none should never be included by the strict no-dead-code principle since the same effect can be achieved by use of compiler flags, presuming this option exists in most relevant compilers. I for one always use the "implicit none" compiler flag, but include the statements in program code anyway. I've read arguments that this is good practice because people could copy and build your code without your flags, but honestly its mostly as a tribute to that month I spend trying to get my stupid routine to make sense...

I would argue that compiler flags are not standard and therefore, passing the compiler flag as an alternative to implicit none does not imply that including implicit none is writing code that has no effect.

Anyway, practice my objection is aimed at the prohibition of implicit none in certain places: I feel this is excessive considering the great cost that can follow from mistakenly leaving it out, and the small cost of its excessive use. I think the style guide should only speak of where it must appear, because there is simply no strong argument against including it excessively.

I think the arguments against including it excessively have already been enumerated: teaching n00bs by example that they need to add it everywhere, increasing the code length, and possibly including it where it is not needed while forgetting it where it is needed.

However, I concede that treating this too rigorously is probably excessive. I think it is very important to say something like "you must use implicit none at the beginning of every module and program declaration" and then include the motto "Don't write dead, unreachable code or code that has no effect" but explicitly banning implicit none in other contexts seems a bit heavy handed to me.

This also raises a final question in my mind: submodules

I don't know enough about them yet, and have never tried using them, but it is conceivable that numerous submodules may have the ability to be reused and integrated into different parent modules. If this is the case, I think you should be declaring implicit none in the submodule if it is allowed to be compiled separately from the main module, or attached to different modules. I don't know if there is a mechanism for adding implicit none at the top of a submodule, or if it inherits it from the parent module. If there's no means of declaring it locally at the top of the file, then I think it will have to be declared in each of the procedures, to explicitly communicate that implicit typing is prohibited.

FWIW (probably not much! 😄) these views are solely my personal views and preferences, and I respect everyone's views on this forum. I just wanted to share my logic and stimulate further conversation.

@szaghi
Member Author

szaghi commented Jan 20, 2016

these views are solely my personal views and preferences and I respect everyones views on this forum. I just wanted to share my logic, and stimulate further conversation.

This should be valid for all of us 😄

This is a collaborative effort trying to summarize our views (not only mine, yours, or @specific_member's).

@zbeekman I agree on all.

@cmacmackin
Collaborator

@zbeekman
I'm pretty sure that you can declare implicit none at the top of a submodule -- a piece of example code someone had written as a demonstration (see below), which I use to test FORD, had it, at any rate. I had assumed that this wouldn't be inherited from the parent, but I don't think I ever actually read that anywhere.

!! Demonstration of Fortran 2008 submodules
! J. Overbey - 1 Dec 2009

!! The module and submodules have the following hierarchy:
!!
!!               module
!!                  |
!!                  |
!!             submodule1
!!                  |
!!                  |
!!             submodule2
!!                 /|\
!!               /  |  \
!!             /    |    \
!!           /      |      \
!! submodule3  submodule4  submodule5

module module
  implicit none
contains ! Empty contains section allowed in Fortran 2008
end module module

submodule (module) submodule1
  implicit none
end

submodule (module : submodule1) submodule2
end submodule

submodule (module : submodule2) submodule3
!! Documentation!
end submodule submodule3

submodule (module : submodule2) submodule4
endsubmodule

submodule (module : submodule2) submodule5
endsubmodule submodule5

@zbeekman
Member

Another case worth considering is when one writes generic procedures in standalone files and then uses include to pull them into a module. In that case I would argue that implicit none is required in the procedure, since, viewing the standalone file, there is no guarantee that implicit none will ever be applied to it.
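A sketch of such a self-protecting include'd procedure (file and function names are hypothetical):

```fortran
! file: vec_norm-inc.f90 (hypothetical) -- written to be include'd into any
! module's contains section, so it cannot assume the host said implicit none
function vec_norm(v) result(n)
  implicit none              ! self-protecting, even if redundant in some hosts
  real, intent(in) :: v(:)
  real :: n
  n = sqrt(sum(v*v))
end function vec_norm
```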

@cmacmackin
Collaborator

👍, although personally I have a strong dislike of include statements and have never been satisfied with them as a technique for generic programming.

@rouson
Collaborator

rouson commented Jan 20, 2016


I haven't read much of this dialogue yet, but I'll briefly add that dead code can lead to confusion:

subroutine cognitive_dissonance(only_2_possibilities)
  logical, intent(in) :: only_2_possibilities
  select case(only_2_possibilities)
  case(.true.)
    print *, "Sure."
  case(.false.)
    print *, "No worries."
  case default
    print *, "Hmm... please remind me why I wrote this. There must have been a reason."
  end select
end subroutine

My philosophy is that you should strive to get 100% test coverage during unit tests, regression tests and integration tests.

Excellent point.
If you can't devise a way to execute a branch of an if statement, or a case from a select case construct during testing, because it is impossible to reach, even with bad user input etc. then that code should be removed. There is no point having it, and it distracts and bloats code from the important parts. Further more, in my experience, assuming a constant error rate (N_errors/statement) has been a decent assumption.

There is empirical data to back this up. I cite it in the 14th slide of
this presentation:

https://www.image.ucar.edu/public/TOY/2008/focus2/Presentations/TALKRouson.pdf

(sorry for not numbering the slides)

As a corollary, the longer you make your code, the more bugs you will introduce. Of course this is a gross approximation, but once you start adding a lot of "dead" code, there's no way to test if it is actually correct, and it obfuscates the other parts of the code that are executed by making them further apart on the page, etc.

Dead code also violates the agile development method of test-driven development (TDD) in which one writes only enough code to pass the test. The test is the specification for what must be written. If one wants more code to be written, then one must say so in a test.

Damian

@tclune
Collaborator

tclune commented Jan 20, 2016

Of course, if there really are only 2 possibilities, then the entire section would be clearer with a logical. And for an integer, do you really want the reader to form a proof that the 'default' block cannot be reached? I'd recommend "always" having a default block, but putting it at a low priority. :-) Best practices can be a real time sink, so prioritize the cases that are more likely to cause problems.

On a slightly related note, what do others think about explicitly including the procedure name in the end clause? E.g.,

subroutine foo( …)
end SUBROUTINE FOO

One could omit “FOO” or “SUBROUTINE FOO” and have the same code.

In the good old days, I liked the fact that my editor did the autocompletion for me, and for long procedures, I still think the redundancy is a good thing. (TM) However, for short procedures it merely induces a burden to change 2 lines rather than 1 if I decide to rename the procedure. This is actually a relatively common mistake that I encounter when doing simple refactoring.

So, I’d stop doing it, but have not been bothered enough to go about determining how to disable this aspect of Emacs - esp. in a manner that allows the redundancy for the occasional long procedure. (I still spend a lot of time editing other people’s legacy code with longer procedures.)

Cheers,

  • Tom


@LadaF
Collaborator

LadaF commented Jan 20, 2016

My practice is to add the name after the end subroutine if the subroutine gets longer than one page in my editor. It's a subjective criterion, and sometimes I forget and add it later, when I find an end subroutine for which I can't immediately see the beginning with the name.

Vlad

2016-01-20 14:56 GMT+00:00 tclune [email protected]:

Of course, if there are really are only 2 possibilities, then the entire
section would be clearer with a logical. And for an integer, do you really
want the reader to form a proof that the ‘default’ block cannot be reached.
I’d recommend “always” having a default block, but putting it at a low
priority. :-) Best practices can be a real time keeper, so prioritize for
the cases that are more likely to cause problems.

On a slightly related note, what do others think about explicitly
including the procedure name in the end clause. E.g.,

subroutine foo( …)
end SUBROUTINE FOO

One could omit “FOO” or “SUBROUTINE FOO” and have the same code.

In the good old days, I liked the fact that my editor did the
autocompletion for me, and for long procedures, I still think the
redundancy is a good thing. (TM) However, for short procedures it merely
induces a burden to change 2 lines rather than 1 if I decide to rename the
procedure. This is actually a relatively common mistake that I encounter
when doing simple refactoring.

So, I’d stop doing it, but have not been bothered enough to go about
determining how to disable this aspect of Emacs - esp. in a manner that
allows the redundancy for the occasional long procedure. (I still spend a
lot of time editing other people’s legacy code with longer procedures.)

Cheers,

  • Tom

On Jan 20, 2016, at 9:37 AM, Damian Rouson [email protected]
wrote:

On Jan 20, 2016, at 6:07 AM, Izaak Beekman [email protected]
wrote:

Should we then, following the no-dead-code principle, mandate that you
should never use a default statement if you are really sure the code will
take one of the earlier branches?

I may be in the minority here, or this opinion may be considered
"extreme" by some, but I would actually argue yes to this point. Increasing
the size of a codebase and the inclusion of unreachable code is a no-no in
my book. Yes it may be seen as defensive coding, but if it's actually
impossible to enter that branch of the select statement (even with
potentially incorrect user input, corrupted data files, or out of memory
errors) then it is "dead" code, and exists for no other reason than to give
you a warm fuzzy feeling that you have been a diligent defensive
programmer. Additionally this also means there is no way to test its
correctness.

I haven’t read much of this dialogue yet, but I’ll briefly add that dead
code could lead to confusion

subroutine cognitive_dissonance(only_2_possibilities)
logical, intent(in) :: only_2_possibilities
select case(only_2_possibilities)
case(.true.)
print *,"Sure."
case(.false.)
print *,"No worries."
case default
print *,”Hmm... please remind me why I wrote this. There must have been
a reason."
end select
end subroutine

My philosophy is that you should strive to get 100% test coverage
during unit tests, regression tests and integration tests.

Excellent point.
If you can't devise a way to execute a branch of an if statement, or a
case from a select case construct during testing, because it is impossible
to reach, even with bad user input etc. then that code should be removed.
There is no point having it; it distracts from, and bloats, the
important parts of the code. Furthermore, in my experience, a constant error
rate (N_errors/statement) has been a decent assumption.

There is empirical data to back this up. I cite it in the 14th slide of
this presentation
<https://www.image.ucar.edu/public/TOY/2008/focus2/Presentations/TALKRouson.pdf>
(sorry for not numbering the slides).
As a corollary, the longer you make your code, the more bugs you will
introduce. Of course this is a gross approximation, but once you start
adding a lot of "dead" code, there's no way to test if it is actually
correct, and it obfuscates the other parts of the code that are executed by
making them further apart on the page, etc.

Dead code also violates agile's test-driven development (TDD)
methodology, in which one writes only just enough code to pass the test. The
test is the specification for what must be written. If one wants more code
to be written, they must say so in a test.

Damian


Reply to this email directly or view it on GitHub: #10 (comment).

Thomas Clune, Ph. D. [email protected]
Software Infrastructure Team Lead
Global Modeling and Assimilation Office, Code 610.1
NASA GSFC
MS 610.1 B33-C128
Greenbelt, MD 20771
301-286-4635



@nncarlson
Copy link
Collaborator

The default clause issue is an interesting one. I tend to agree with @zbeekman here. My policy is to omit it if it is not needed, with one exception. If the intent is that one of the case stanzas is always executed, but it is not completely clear from the immediate context that this will be the case, then I do add the default clause with an assertion:

    case default
      ASSERT(.false.)
    end select

Here ASSERT is part of a simple macro-based DBC system that I use. ASSERT asserts that its argument is true, so the strange idiom ASSERT(.false.) will trigger an error. In this case the default clause serves to document to the reader the intent that one of the case clauses is supposed to be executed.
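In context, the pattern might look like this (the case values and procedure names here are hypothetical placeholders, not from any real codebase):

```fortran
select case (phase)
case (PHASE_ASSEMBLE)
  call assemble_system()
case (PHASE_SOLVE)
  call solve_system()
case default
  ! unreachable by design; the assertion documents and enforces that intent
  ASSERT(.false.)
end select
```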

@szaghi
Copy link
Member Author

szaghi commented Jan 20, 2016

To all: wonderful... how many notes I am keeping...

@nncarlson what does DBC mean? It is very interesting; Fortran error handling is a mystery for me, can you elaborate more?

@nncarlson
Copy link
Collaborator

@szaghi, Design-by-contract. The full-blown concept (which I really don't know) is much more involved than my simple use of it with things like preconditions and postconditions. This is built into some languages. C handles it (if I remember correctly) via macros like I am doing. I've got two macros ASSERT and INSIST. ASSERT(expr) expands to a statement that calls an "abort" procedure if expr is not true, which prints out file and line number, and then halts execution. That is unless the code is compiled with the (boolean) macro NDEBUG defined, in which case ASSERT expands to a Fortran comment. INSIST is just like ASSERT except it is always live. The policy is that these should never be triggered; e.g., they are not for checking user input or errors that could result from bad user input -- proper error checking should be used for that.

I've found this to be so incredibly helpful during code development at immediately pinpointing programming errors that would otherwise require hours (or days even) to track down. It also serves as very useful documentation to the reader. For example, if a subroutine takes two array arguments that should be of the same size, I'll do something like

subroutine foo (a, b)
  integer :: a(:), b(:)
  ASSERT(size(a) == size(b))
  ! ...
end subroutine foo
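For the curious, such macros might be sketched roughly as follows. This is a hypothetical reconstruction (the file name, macro bodies, and `dbc_abort` helper are all made up for illustration); Neil's actual implementation may well differ:

```fortran
! dbc.fpp -- hypothetical sketch of ASSERT/INSIST; requires a preprocessing
! pass (e.g. fpp or cpp). Compiling with -DNDEBUG strips the ASSERT checks.
#ifdef NDEBUG
#define ASSERT(c)
#else
#define ASSERT(c) if (.not.(c)) call dbc_abort(__FILE__, __LINE__)
#endif
#define INSIST(c) if (.not.(c)) call dbc_abort(__FILE__, __LINE__)

subroutine dbc_abort(file, line)
  use iso_fortran_env, only: error_unit
  character(*), intent(in) :: file
  integer, intent(in) :: line
  ! report where the assertion failed, then halt
  write(error_unit,'(a,a,i0)') file, ': assertion failed at line ', line
  error stop
end subroutine dbc_abort
```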

@zbeekman
Copy link
Member

Yes, I'll add my 👍 for @tclune's ESMF technique of requiring keywords for optional args. It makes code more legible if arguments are given sensible names and argument lists are not huge.

@nncarlson
Copy link
Collaborator

@tclune the ESMF keyword technique is very cool. I'd not seen this before, but I'm sure to use it now. I'd leave the definition of the unused type local and not use a module though; it's just several extra lines which I find preferable to introducing a new module dependency, new file, etc. I wonder if this isn't also a new language feature that could be proposed (if it hasn't already); a special argument list marker (e.g. ':') that indicates all subsequent arguments must use keywords? Off hand I don't see why such arguments must always be optional.

@zbeekman
Copy link
Member

True, it appears you could force all arguments to use keywords by creating an inaccessible type and using it as the first dummy argument.
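A quick sketch of that idea (the module, type, and procedure names here are invented for illustration):

```fortran
module keyword_only_mod
  implicit none
  private
  type :: NotUsed_t        ! inaccessible outside this module
  end type
  public :: draw
contains
  subroutine draw(unusable, width, height)
    type(NotUsed_t), intent(in), optional :: unusable
    real, intent(in) :: width, height
    print *, width*height
  end subroutine draw
end module keyword_only_mod

! Callers outside the module cannot construct a NotUsed_t, so they must
! omit the first argument, and once an argument is omitted the remaining
! ones must be passed by keyword:
!   call draw(width=2.0, height=3.0)   ! valid
!   call draw(2.0, 3.0)                ! rejected: 2.0 cannot bind to unusable
```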

@szaghi
Copy link
Member Author

szaghi commented Jan 21, 2016

Dear All,

I have added a first draft of this guideline. I hope I have added all the on-topic suggestions; for the off-topic ones I have taken notes for other discussions (coming soon).

For the implicit templating trick I have added a comment here, but I realized while typing it that I have not completely understood it... please amend it with a clearer example, at least for me 😄

See you soon.

@milancurcic
Copy link
Collaborator

@nncarlson @zbeekman

Off hand I don't see why such arguments must always be optional.

In the specific case of ESMF and @tclune correct me if I am wrong, the motivation behind this is to ensure future backward-compatibility. In this way, the API can be extended in the future versions while not breaking the procedure calls that were valid for previous versions of the library.

@tclune
Copy link
Collaborator

tclune commented Jan 21, 2016

Yes - that was the ESMF motivation as I understood it at the time.

But it can only be used for requiring keywords for optional arguments. The reason is simply that the “Unusable” argument itself must be optional (think about it). And you cannot put non-optional arguments after optional arguments.

You could have a non-enforceable policy to always use keywords, but I would generally find that style to be overly verbose. Even modestly sized statements would tend to wrap beyond one line. But if your interfaces support small numbers of reasonably short argument names, I’d have no problem with it. The readers of that code would probably appreciate it.

  • Tom


@rouson
Copy link
Collaborator

rouson commented Jan 21, 2016

On Jan 21, 2016, at 5:21 AM, tclune [email protected] wrote:

There is a general consensus that the committee needs to slow down to prevent large gaps between the standard and actual implementations by the vendors.

I wish that slowing down would actually help vendors catch up, but I think there is more than one vendor that has stalled, in the sense that they haven't added any significant new standards support in quite a while. One vendor achieved Fortran 2003 compliance roughly 5 years ago and is still missing arguably the biggest Fortran 2008 feature: coarrays. Half a decade should have been enough time to implement the feature. No amount of slowing down on the committee's side will address this.

Another vendor finished Fortran 2003 compliance roughly 2 years ago and hasn’t added even one significant 2008 feature. In some arenas, this would lead to consolidation in the market, but it seems unlikely, given the various competitive advantages each vendor has on their own hardware (if they make hardware).

D

@nncarlson
Copy link
Collaborator

But it can only be used for requiring keywords for optional arguments. [...] And you cannot put non-optional arguments after optional arguments.

@tclune are you sure about that? A quick test with ifort and nagfor suggests it is possible to have non-optional arguments follow optional arguments (in the procedure declaration). It is true that in a procedure reference non-keyword arguments are invalid after keyword arguments have been encountered, but that is exactly what is desired.

@rouson
Copy link
Collaborator

rouson commented Jan 21, 2016

On Jan 21, 2016, at 6:05 AM, Izaak Beekman [email protected] wrote:

There is a general consensus that the committee needs to slow down to prevent large gaps between the standard and actual implementations by the vendors.

I would argue that the vendors need to get their $h!7 together! (At the risk of sounding like Linus Torvalds discussing Nvidia: the slow pace of development from some/most of the major compiler vendors has been pretty abhorrent. I'm not suggesting that their job is easy, but it seems like some of them have really been dragging their heels. And then when they do support a feature from a post-95 standard, it's half-finished, bug-riddled work. End rant.)

I see that Zaak and I appear to be twins here of late. I just saw the above post after sending an almost identical message myself.

What is needed is for customers with deep pockets to make standards compliance a contractual obligation for delivering major hardware. If centers buying $100-million supercomputers started stipulating Fortran 2008 compliance in their requests for proposals for major acquisitions, then we would suddenly see Fortran 2008 compliance everywhere. This has happened, and has worked, on occasion. I suspect it would happen more often if users were louder. The Fortran community is a very quiet community. If any of you receive a user survey from a large supercomputer center, please respond, make it very clear in the comments that Fortran 2008 compliance is a pressing need for your project, and tell everyone you know with accounts to do the same. At some point, the centers will have to take notice and the vendors will have to respond.

Just for a little perspective: the Fortran 2003 standard was published in 2004 and the Fortran 2008 standard was published in 2010! Cray's compiler has been Fortran 2008 compliant since at least 2014. The fact that no one has caught up with them or done the same thing now, more than five years after the standard was published, is a crime.

Damian

@tclune
Copy link
Collaborator

tclune commented Jan 21, 2016

You are correct. I was typing quickly and rationalizing the answer that I expected. Of course, an optional argument before a non-optional argument is not really optional, is it?

  • Tom


@SourcerersApprentice
Copy link
Collaborator


Having seen both sides here, both working for a vendor, and at other
times being unable to afford the price of a compiler, I would remark
that unless a vendor is married to a hardware manufacturer, they are
probably getting squeezed pretty hard, and probably don't have the
resources to do the required development. I think there is a high
likelihood that as more people demand modern compiler features these
vendors without connections will eventually be forced out.

I would also remark that the development of debugging capability is even
more abysmal than compiler development. Many debuggers including gdb are
unable to examine basic Fortran 90 data structures like linked lists,
which have been in the language for some 25 years.

@cmacmackin
Copy link
Collaborator

There is a branch of GDB which can handle at least some of the F90
features, although I can't remember off the top of my head whether it can
manage pointers. Also, which commonly used vendor (other than GFortran)
isn't associated with a hardware manufacturer? NAG, I guess, but they
aren't that big a player.

@muellermichel
Copy link
Collaborator

I think that we should face one reality: Fortran is de facto today the language of a rich but niche market. Furthermore, as pointed out above, selling software has become increasingly hard, even for HPC. Paying HP 100 million? No problem. Paying PGI, Allinea and Roguewave 10 million (i.e. 20 person-years if by some miracle it all went into R&D, realistically more like 5)? Suddenly a problem. The only ones who've fixed this are Cray, by basically being the Apple of HPC (vertical integration). And indeed their compilers tend to be among the most compliant, at least AFAIK.

So my point being: We simply have to live with the fact that 2008 compliance is still way out for some compilers. The question is, whether portability is more important to the best practices proposed here, or whether it is "modernness" of code. I tend to favor portability. It would be interesting to get the other opinions on this.

Edit: One more idea on this: how about we add some visual markers to the guide that display the compiler compatibility for each recommendation? Maybe format it as an HTML table with Unicode symbols? [cm] = checkmark, [x] = no support:

[cm] All compilers

[cm] GNU  [x] g95  [x] ifort  [x] PGI  [cm] Cray  [x] IBM

@zbeekman
Copy link
Member

I think @muellermichel's suggestion of marking recommendations with their compiler compatibility is a sad but necessary measure. We should make sure we include the version numbers too, in the event that the list doesn't keep pace with compiler releases.

I also think that it is important to strike a compromise between portability and "modern-ness". On the one hand, I don't think you should write new code assuming the worst, most antiquated systems must be able to run it; if you did that, you'd be stuck writing Fortran 77! On the other hand, some features just aren't supported (even if they claim to be) by compiler vendors, and this can cause a real headache.

If only there were a tool that would help translate modern Fortran into something more widely understood by today's compilers, to free us from the tyranny of the slow/crummy implementation of new language features...

@cmacmackin
Copy link
Collaborator

Well, if you want a modern language which can be converted into an older one, there's always Vala, which gets translated to C before being compiled. Not really relevant, but it's a cool project because it is an object-oriented language which can produce (as well as use) libraries written in C. It also has C-like performance, apparently. It would be entirely possible for people to write a similar tool for Fortran, though not enjoyable... I certainly am not nominating myself!

@rouson
Copy link
Collaborator

rouson commented Jan 22, 2016

Portability versus modernness is a very difficult question, and I'm not confident there is one reasonable answer that can clearly be identified as the correct answer (or even the consensus answer) in the abstract (meaning without a funded project with deadlines to drive the decisions). I'll offer that one of the best pieces of advice I ever received was from the review my publisher solicited when I submitted my book proposal. I was planning to use Fortran 95 to emulate OOP in my book. The reviewer commented, "If you really want the book to have lasting value, use the OOP features of Fortran 2003." That was early in 2007 and I didn't know of any compilers that supported the OOP features of 2003 at the time. It took upwards of six months to find out that IBM supported the features and to find IBM compiler test team engineer and Fortran standards committee member Jim Xia to work with me on learning the features and writing a journal article on design patterns in modern Fortran.

Arguably that reviewer's one comment had a greater impact on my career than any other single sentence of advice. It is a very large part of why I'm doing what I'm doing today.

If we want our work to have lasting value, I suggest we choose modernness over portability, assuming portability is defined by the ability to use a wide range of compilers.

Alternatively, if portability is defined by the ability to use a wide range of hardware, I suggest that most of Fortran 2008 and even a goodly chunk of Fortran 2015 can be used on a very wide range of hardware if one installs the current development branch of gfortran. And for many people who would be uncomfortable installing the development branch themselves, I suggest installing OpenCoarrays, which will install gfortran for you if it does not detect gfortran in your path or if the gfortran version in your path is old. (@zbeekman, let's get that next release out the door. ;) )

@szaghi
Copy link
Member Author

szaghi commented Jan 22, 2016

@muellermichel wonderful idea!

@zbeekman I agree.

@rouson I think that Fortran 2003 is sufficiently mature to be considered widely supported, do you agree?

I am not stressing all 2003 features and I am not using many 2008 ones, but my feeling is that with the Intel, GNU and IBM compilers the way to 2003-modern-ness is feasible (I am facing some bugs, but workarounds exist).

I still think that portability, understood as a synonym of practicality, counts more than modern-ness in the end. For example, I would very much like to try Parametrized Derived Types, but their support is so limited that my portability/practicality considerations stop me from trying.

P.S. OpenCoarrays rocks!

@rouson
Copy link
Collaborator

rouson commented Jan 22, 2016

On Jan 22, 2016, at 7:55 AM, Stefano Zaghi [email protected] wrote:

@muellermichel https://github.com/muellermichel wonderful idea!

@zbeekman https://github.com/zbeekman I agree.

@rouson https://github.com/rouson I think that Fortran 2003 is sufficient mature to be considered widely supported, do you agree?

I am not stressing all 2003 features and I am not using many 2008 ones, but my feeling is that with Intel, GNU and IBM compilers the way to 2003-modern-ness is feasible (I am facing with some bugs, but workarounds exist).

I would go much further than that. I don’t think we should let any one particular compiler hold us back when there exists a free, open-source compiler that is easily installable on nearly any hardware. Adopt gfortran and let the rest of the compiler vendors spin their wheels trying to catch up. I grew weary of waiting and trying to come up with a common set of well-supported features. I just won’t live that way anymore.

Moreover, parallelism is of paramount importance in the multi-core, many-core era. In that regard, coarray Fortran is a game-changer. The power of being able to write code with no reference to external libraries and have that code scale from a dual-core laptop to a 100,000-core supercomputer is just too big a deal to ignore. You can do that today with gfortran (and with Cray).

I still think that portability, understood as a synonim of practicality, counts more than modern-ness at the end. For example, I would very very like to try Parametrized Derived Type, but its support is so limited that my portability/practicality considerations stop me to try.

I always find it a bit confusing that people use the word to mean “using the feature with different compilers” rather than “using the feature with different hardware”. In the mainstream, non-technical world, I think most people use the term “portability” to mean moving objects physically from one place to another. It is only in the Fortran world that the unevenness of compiler support has forced us into this awkward use of the word to mean moving code across compiler space. Most importantly, if we choose this awkward use of the word, then we will make decisions today that will seem outdated next year or the year after.

Damian
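
As a minimal illustration of the coarray model described above (a sketch only; under OpenCoarrays/gfortran it would be built with the `caf` wrapper and launched with, e.g., `cafrun -n 4 ./a.out`):

```fortran
program image_sum
  implicit none
  integer :: total[*]    ! a coarray: one copy of total per image
  integer :: i
  total = this_image()   ! stand-in for each image's local work
  sync all               ! make every image's result visible
  if (this_image() == 1) then
    ! image 1 pulls the remote copies and accumulates them
    do i = 2, num_images()
      total = total + total[i]
    end do
    print *, 'sum of image indices over', num_images(), 'images:', total
  end if
end program image_sum
```

The same source runs unchanged whether launched on 2 images of a laptop or thousands of images of a supercomputer; only the launch command changes.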

@muellermichel
Copy link
Collaborator

Alternatively, if portability is defined by the ability to use a wide range of hardware

Yes, that's how I meant portability. I would actually narrow it down further: look at the Top 100 list, take the systems that have been introduced or upgraded in the last, say, 3 years, and look at their architecture: what compilers / versions can they support?

I suggest that most of Fortran 2008 and even a goodly chunk of Fortran 2015 can be used on a very wide range of hardware if one installs the current development branch of gfortran.

Here's where I don't agree: Today we should get ready for hardware architectures with increasing amounts of shared memory parallelism, namely today Tesla cards and Intel MIC, but it could be a number of other architectures further down the road. At least with GPUs we are limited to (a) Cray systems with Cray or PGI compiler or (b) other systems with PGI compiler. Gfortran just isn't an alternative there, and probably won't be for quite some time.

@zbeekman
Copy link
Member

@muellermichel : OpenCoarrays is actively working on supporting these architectures; the future may be closer than you think!

@muellermichel
Copy link
Collaborator

@zbeekman : Interesting to know. How soon would you guess this can be ready for production? By that I mean support for all relevant modern GPU features like unified memory, GPUdirect, and dynamic parallelism; furthermore, support for the different storage orders used for different architectures; and last but not least support for at least one usable debugger and profiler, all without breaking for large codebases. I've had quite some experience with OpenACC now, which strives for these goals at an IMO more modest level, and it is not stable enough for large codebases like, say, WRF. The amount of stuff that still breaks between each point release is tremendous.

@szaghi
Copy link
Member Author

szaghi commented Jan 22, 2016

@rouson

I would go much further than that. I don’t think we should let any one particular compiler hold us back when there exists a free, open-source compiler that is easily installable on nearly any hardware. Adopt gfortran and let the rest of the compiler vendors spin their wheels trying to catch up.

😄 I agree, GNU gfortran is my preferred compiler now (again).

I always find it a bit confusing that people use the word to mean “using the feature with different compilers” rather than “using the feature with different hardware”.

Indeed, I would like to be totally unaware of compiler and/or hardware: in my Utopia I would like to deal with only Fortran, but as @muellermichel stated above, the reality is sadder.

I think most people use the term “portability” to mean moving objects physically from one place to another.

This is not my case: I often use portability in the context of moving the code from one architecture to another. In particular, with architecture I mean the available combination of hardware/compiler. In the past I have faced the impossibility of selecting my preferred compiler, thus porting the code from my architecture to another was problematic (there were some non-standard codes). In my mind, being standard = being portable = being practicable (what an approximation of English I am using... order -10).

It is only in the Fortran world that the unevenness of compiler support has forced us into the is awkward use of the word to mean moving code across compiler space.

Yes, but I think that coarrays are changing many things in this regard: they are a standard way to exploit the hardware while being unaware of the hardware itself. At least, this is my hope.

@rouson
Copy link
Collaborator

rouson commented Jan 22, 2016

@muellermichel, if you're considering using coarray Fortran (CAF) with gfortran on the architectures you mentioned, then I suggest posting to the OpenCoarrays Google Group or emailing us at [email protected] and describing what you'd like to do. I think you have to subscribe to post, but it's a very low-volume list. The most recent message was on 17 December. As @zbeekman said, it might not be as far off as you're thinking, and there might even be some existing capabilities you could use if you clone the repository. Alessandro has done some encouraging testing on Intel MIC and is working on some support for offloading to NVIDIA. I'm certain there is a long road ahead, but it's likely that more has been done already than you're imagining.

If you're talking about non-CAF calculations, then I'm less knowledgeable, but I would then suggest emailing the gfortran mailing list to describe what you want to do. Again, I think there might be more there than is widely known.

@SourcerersApprentice
Copy link
Collaborator

On 1/22/2016 1:00 AM, Chris MacMackin wrote:

There is a branch of GDB which can handle at least some of the F90
features, although I can't remember off the top of my head whether it can
manage pointers.

GDB's pointer management is present, but incomplete. They have not
supported pointer members of a derived type yet. Despite this lack of
support, I would otherwise say that GDB is one of the best debuggers I
have ever used as far as the feature set is concerned.

Also, which commonly used vendor (other than GFortran)
isn't associated with a hardware manufacturer? NAG, I guess, but they
aren't that big a player.
In my opinion, the only "commonly used compilers" at this point would be
Intel and GFortran. Cray and PGI are limited by their hardware
dependence and I think of them more as boutique players. Other vendors I
had in mind were names like Absoft, NAG, Lahey and Salford. The fact
that they are not commonly used is a reflection of the fact that they
were not able to keep up with optimization technologies or make the jump
to Fortran 2003, and this is why in my opinion they are eventually
doomed. I forget where I heard this, but a few years ago there was a
prognostication circulating that in 10 years, the only players left in
the mainstream PC market would be Intel and Gfortran. A couple of years
down the road now, it still seems to me like things are heading that way.

@nncarlson
Collaborator

Other vendors I
had in mind were names like Absoft, NAG, Lahey and Salford. The fact
that they are not commonly used is a reflection of the fact that they
were not able to keep up with optimization technologies or make the jump
to Fortran 2003.

No way would I put NAG in that category. Their 6.0 compiler is nearly
complete 2003 (defined input/output is the only thing it is missing that I
can think of) and many 2008 features (but no submodules or coarrays) and
what it does have is solid. You can't say that for GFortran; it's got far
too many bugs with 2003 features (It doesn't work for me) and progress on
fixing them is glacial.

@SourcerersApprentice
Collaborator

On 1/22/2016 8:06 AM, Damian Rouson wrote:

I would go much further than that. I don’t think we should let any one
particular compiler hold us back when there exists a free, open-source
compiler that is easily installable on nearly any hardware. Adopt
gfortran and let the rest of the compiler vendors spin their wheels
trying to catch up. I grew weary with waiting and trying to come up
with a common set of well-supported features. I just won’t live that
way anymore.
I have to agree here that the existence of GFortran means we can use
modern Fortran features without fear.

I always find it a bit confusing that people use the word "portability" to mean
“using the feature with different compilers” rather than “using the
feature with different hardware”. In the mainstream, non-technical
world, I think most people use the term “portability” to mean moving
objects physically from one place to another. It is only in the
Fortran world that the unevenness of compiler support has forced us
into this awkward use of the word to mean moving code across
compiler space. Most importantly, if we choose this awkward use of the
word, then we will make decisions today that will seem outdated next
year or the year after.
I have to admit I have always thought of "portability" as movement
between compilers. From my point of view as a Fortran programmer, I
expect architecture to be hidden by the compiler, and differences in
architecture should not affect how I write my Fortran code. I also think
of "portability" as synonymous with "standard conforming", and I have
"portability issues" if I have needs that the standard does not address.
However, if I take off my Fortran programmer's hat and put on my compiler
maker's hat, everything changes, and my opinions then align better with the
above statement.

@rouson
Collaborator

rouson commented Jan 22, 2016

On Jan 22, 2016, at 11:35 AM, Neil Carlson [email protected] wrote:

No way would I put NAG in that category. Their 6.0 compiler is nearly
complete 2003 (defined input/output is the only thing it is missing that I
can think of) and many 2008 features (but no submodules or coarrays) and
what it does have is solid.

I agree.

You can't say that for GFortran; it's got far
too many bugs with 2003 features (It doesn't work for me) and progress on
fixing them is glacial.

What version are you using?

If the related bugs have been reported via gfortran’s Bugzilla site, then please
post a list of the ID numbers. My experience with bug fixes has been
the exact opposite. I reported one bug in the Fall that was fixed within 24 hours
and another that was fixed within 48 hours. A certain amount of it is probably
related to distilling the bug reports down to very isolated demonstrators (15-20
lines is optimal and nearly always achievable with sufficient effort). Another
big chunk of it probably relates to the fact that I have on a small percentage
of occasions used project funds to pay a developer to fix one or more bugs.

If organizations pay for a license for commercial compilers, then it’s reasonable
that they pay occasionally for support on an otherwise free compiler.

To be sure, I have also seen some bug reports languish with gfortran, but
fortunately for me, there haven’t been any that were showstoppers and I think
that’s a pretty strong statement given the broad cross-section of Fortran 2003,
2008, and even 2015 that I use regularly.

Damian

@SourcerersApprentice
Collaborator

On 1/22/2016 11:35 AM, Neil Carlson wrote:

Other vendors I
had in mind were names like Absoft, NAG, Lahey and Salford. The fact
that they are not commonly used is a reflection of the fact that they
were not able to keep up with optimization technologies or make the jump
to Fortran 2003.

No way would I put NAG in that category. Their 6.0 compiler is nearly
complete 2003 (defined input/output is the only thing it is missing that I
can think of) and many 2008 features (but no submodules or coarrays) and
what it does have is solid. You can't say that for GFortran; it's got far
too many bugs with 2003 features (It doesn't work for me) and progress on
fixing them is glacial.


Reply to this email directly or view it on GitHub
#10 (comment).

I am not too familiar with the NAG compiler, and in light of this
information I would remove them from that list.

@rouson
Collaborator

rouson commented Jan 22, 2016

On Jan 22, 2016, at 11:44 AM, SourcerersApprentice [email protected] wrote:

I have to admit I have always thought of "portability" as movement
between compilers. From my point of view as a Fortran programmer, I
expect architecture to be hidden by the compiler, and differences in
architecture should not affect how I write my Fortran code. I also think
of "portability" as synonymous with "standard conforming", and I have
"portability issues" if I have needs the standard does not address.
However, if I take off my Fortran programmers hat and put on my compiler
makers hat, everything changes and my opinions now better align with the
above statement.

Thanks for sharing this perspective. I’ve had such a hard time relating to the common
uses of the word so hearing more perspectives helps. I even hear people use the word
for simply adding or swapping features, e.g. porting to coarray Fortran from MPI. Clearly
the word has been overloaded with a lot of meanings.

Damian
