...but that's why I started this thread, because I consider it unfair.
Well, unfair from your point of view, but very fair from my point of view, as a user of GitHub.
The problem is, you want the audience of GitHub to be a group that it isn't, so that you can see things rated by criteria other than the ones developers use and need. That is unfair to developers, who are looking for ratings on GitHub by what's good and useful to them, not to end users.
And really, to be fair to developers of end-user retrocomputing products, they should not have to put their code up on GitHub, or even open-source it, in order to be able to get ratings. Any hardware developer who puts her stuff up on GitHub runs the risk of it being grabbed by a Chinese manufacturer who then floods the market with cheap copies of it, probably killing her income if she sells the device herself. So under such conditions, asking developers of hardware to put their schematics and code on GitHub in order to earn "thumbs up tokens" would be very unfair to them.
Some people benefit from it, but not the original author. Like I said - if you use a FreHD, or any other piece of hardware / software, and consider it useful, consider upvoting it on GitHub; it will likely help the developer in some form or another.
Well, as someone who actually presents his GitHub projects to potential employers and clients, I disagree.
And I'd suggest that the developer of FreHD, if its popularity amongst end-users is going to be important to someone he's talking to, present something that actually shows that popularity, i.e., his sales figures, not stars on GitHub. Someone paying money for his product is a far more convincing argument that it's useful to end users than a star on GitHub.
The latter speaks for you, and this is exactly how I think - however, a Star on GitHub is likely based on a much more informed decision and I consider this a much more valuable metric compared to other social media "thumbs up" BS.
I see no reason to believe that a star on GitHub is an informed decision; it's more likely just an "I think this is cool" or "I want this in my list of things to come back and look at again" decision. As an indicator of developer skill, that's not nearly as useful as someone upvoting an answer on Stack Overflow, which almost invariably means, "This person gave me useful technical information that helped me solve a problem" or "this is good and well-presented technical information that answers the question."
Given that GitHub is used by people "in the know" - and then you could also look at the number of forks, merge requests, etc., activity.
You could, but those metrics are just proxies, and often enough poor ones at that. Forks are more often a bad sign than a good one; long-lasting forks usually indicate a project that won't add features that users want or, worse yet, a project that's no longer being maintained. Forks also exist because you have to fork in order to create a PR; some people delete them afterwards (I usually do), others don't.
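(For the curious: here's a minimal sketch of that fork-then-PR lifecycle, using the public GitHub REST API. The repo names, branch, and token below are placeholders, not real projects.)

```python
# Sketch of the fork -> pull request -> delete-fork lifecycle via the
# public GitHub REST API. Repo names, branch, and token are placeholders;
# this assumes the branch with your fix already exists on the fork.
import requests

API = "https://api.github.com"
HEADERS = {
    "Authorization": "Bearer YOUR_TOKEN",   # placeholder token
    "Accept": "application/vnd.github+json",
}

UPSTREAM = "some-org/some-project"          # hypothetical upstream repo
FORK = "your-user/some-project"             # your fork of it

# 1. Fork the upstream repo (POST /repos/{owner}/{repo}/forks).
requests.post(f"{API}/repos/{UPSTREAM}/forks", headers=HEADERS)

# 2. Open a PR from the fork's branch against upstream's main branch.
requests.post(
    f"{API}/repos/{UPSTREAM}/pulls",
    headers=HEADERS,
    json={
        "title": "Fix frobnication bug",
        "head": "your-user:fix-branch",     # branch on the fork
        "base": "main",
    },
)

# 3. Once the PR is merged, the fork has served its purpose and can be
#    deleted (needs the delete_repo scope) -- which is why fork counts
#    both over- and under-state real interest in a project.
requests.delete(f"{API}/repos/{FORK}", headers=HEADERS)
```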
PRs are also just a proxy. Sure, a good project that needs better features can generate PRs, but a project full of bugs can also generate PRs to fix all those bugs. A project with a number of committed developers may generate few PRs because developers simply commit their code directly, whereas a project making much less progress may have many more PRs because most of the code comes from "drive-by" developers (who by nature are much less familiar with the project) rather than core developers. And it's important to note that, in general, projects that take contributions mostly through PRs rather than from core developers will be slower moving, because PRs are a very high-ceremony, process-heavy way of getting code onto the main branch. (This is by design: they exist to allow developers not familiar with the project and its processes, and with poor communication links to the core developers, to contribute.)
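And note that all of these numbers are a single API call away, which is part of why they're such cheap signals. A minimal sketch, again with a placeholder repo name:

```python
# All the "activity" numbers come back from one unauthenticated GET
# (rate-limited, but it works). Repo name is a placeholder.
import requests

repo = "some-org/some-project"       # hypothetical repo
info = requests.get(f"https://api.github.com/repos/{repo}").json()

print(info["stargazers_count"])      # stars (really bookmarks)
print(info["forks_count"])           # forks, for whatever reason they exist
print(info["open_issues_count"])     # open issues *and* open PRs combined

# The counts say nothing about *why*: a fork made to send one PR and a
# fork made because the project is abandoned look identical here.
```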
Cloning a project in itself requires at least some skills, whereas consuming a SO answer does not.
Cloning a project requires pressing a button. Voting on an SO answer requires pressing a button. Starring a project on GitHub requires pressing a button. That's about equal in skill level as far as I can tell.
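If you want to see just how little effort a star represents: it's literally one authenticated request against the GitHub REST API (token and repo name below are placeholders):

```python
# Starring a repo is a single PUT to /user/starred/{owner}/{repo}.
import requests

requests.put(
    "https://api.github.com/user/starred/some-org/some-project",
    headers={
        "Authorization": "Bearer YOUR_TOKEN",   # placeholder token
        "Accept": "application/vnd.github+json",
    },
)  # a 204 response means the repo is now starred; that's the whole "vote"
```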
This is definitely a metric of much higher quality than upvotes on Reddit, Quora, or SO, where 99% of them answer questions of junior developers.
Sure, which is why I already explained to you that all of these "thumbs up" metrics alone are near useless. (In fact, I explained in great detail the problem with the upvote metric on Stack Overflow, and how to get past it to the useful data.)
Looking at those data takes work. Not a huge amount, but you actually have to read the answer or the code or whatever and use professional judgement as to whether it's well done and what it says about the skill level of the writer or developer. I know you wish that there were some magic number that you could look at, rather than having to have that level of skill at evaluation, but there ain't, and there never will be, especially when that magic number involves how many people click a "thumbs up" button on the Internet. And of course it's even worse when the "thumbs up" is not necessarily a rating but a bookmark, as stars are on GitHub.
...and hence I conclude that it would be good if she / he got some stars! So, given that Stars are informed, we should IMHO also include other criteria than formal software engineering criteria for assessing the projects.
Well, I'm glad you agree with me that formal software engineering criteria should not be used for assessing projects.
From many of your comments, including your misunderstanding of the difficulty levels of the two projects in question and the fact that you appear to think I was judging the projects above by "formal software engineering criteria," I think you are not a developer of hardware and software.
I was judging the two projects not by any formal criteria but by how easy the developer made it for me to understand their system, make their hardware and use their code. I.e., I was looking at it from the point of view of, "can I download this and easily make the device?" or "can I download this and easily find and fix a bug?" The difference between the two projects in those areas is vast, and easily discernible to anybody who actually does these things.
But someone who doesn't do these things is likely to confuse actual usability for developers with:
...all the usual bean counter-style of arguments (documentation and what have you)
as you do.
...but I am seeing FreHD as a gift to the community, and it doesn't seem to be appreciated.
Well, part of that is because it's a "gift" that's such a serious PITA to actually make use of in its form on GitHub that even I, who am perfectly capable of making one from that repo if I were willing to put in that much work, would just go out and buy one instead if I really wanted one. Going through that mess is not worth saving $70 or whatever it costs.
As I said in my previous message, go and generate the Gerbers, build the software, etc. yourself, and see just how hard that project makes it. If you're not capable of doing this, you're not capable of judging the quality of the design files, source code and documentation there.
And I still think that a PCB replica requires much less engineering effort and ingenuity than FreHD.
Well, now that I've had a closer look at it, I'd say the PCB replica requires somewhat more ingenuity on the hardware side (though less on the software side), but far more engineering effort. I say this as someone who knows and has done the kind of development and debugging both projects need.
And that the public recognition doesn't reflect that. What if Einstein's theory of General Relativity was rejected because he didn't use the right font, mathematical conventions, or paper quality?
Papers get rejected all the time because they don't use the right mathematical conventions: if someone can't read the equations, they can't understand the paper. Papers also get rejected all the time because of poor writing: again, if someone can't understand it, it's not a good paper. The idea may be great, but if it's not communicated well, it stays with the author rather than going out to the world.
(By the way, I've written an academic paper, shepherded it through peer review, and had it accepted, so I'm pretty familiar with the process.)
Again, please don't feel offended by my opinion. It's just what I think.
I'm not offended; I'm just trying to explain to you where you've gone very wrong in your idea of how the world should work and how you're evaluating these projects.