hey there homu
I remember you used Notepad to note down your results, and as I got lazy with noting results down, I googled for an hour and found this: https://i.imgur.com/p2ekVcP.png
in short, you press an arrow and the number in the cell increases/decreases
it seems that you can edit the layout of the counter
if you are interested and have OpenOffice Calc, I can help you set it up : >
you get OpenOffice here: https://www.openoffice.org/download/ (the whole package)
and that counter is a built-in feature of OpenOffice
it's well hidden, but get OpenOffice installed first and send me a screenshot of your taskbar, as mine is in German.... : s
The upper two (Akashi Lv50 and Lv107) show to which star level the 12.7cm can be improved using only 10 screws. The author tried improving ten such turrets. To illustrate, the first 12.7cm twin mount (shown in the first row) was improved ten times, while the 7th, 8th & 10th trials ended in failure.
The lower two might be more useful: they show how many screws in total were consumed to improve a 12.7cm to star max. The first mount used 13 screws, the second one 13 & the third one 11. On average, the ten turrets used 11.9 screws.
When using Akashi Lv107, the average is 10.7 screws.
Talking about improvement success rates, do you think sparkling both Akashi and the 2nd ship increases them? I got a perfect streak improving the pasta gun to kai with a Lv35 Akashi. Not sure if luck or kirakira affects the upgrade :3, though I didn't rush and did the improvement over 10 days :v
@homu do you think it's worth mentioning this in one of your blogs?
@marc I'm currently leveling my Akashi through PvP, and she is at morale 97+ most of the time
I'm upgrading most stuff to +8 and use the slider from +8 --> +9 and +10
yet I didn't fail a single improvement to +7/+8; might be RNG or whatnot, and on the contrary I failed very often at +9 and +10....
it's gonna be difficult to actually test it, as every piece of equipment seems to have its own success rate (for example, upgrading a 12.7 twin gun seems to succeed much more often than a Type 91 shell)
and screws are hard to get : v
For a lot of these tests, I feel the best approach is to automatically track and combine data so that we can see patterns. A lot of sites seem to have developed this approach as well, mainly for construction, development and ship drops. An example is https://myfleet.moe/
Collecting data from many different sources will make it much more difficult to analyze, but it also offers the opportunity to see things we might have missed.
Information and data could be the next generation of improvement to game clients. This could mean telling which guns are the most suited for CLs, or what artillery setup works best in which situations. With enough statistics and understanding of game mechanics, we could even simulate whole sorties thousands of times and test different compositions to find different solutions.
first, someone would have to program software that collects this data from several thousand clients... maybe DJ could?
as we don't have access to this kind of software yet, we are forced to run experiments and note down the results slowly
It would probably require a dedicated team to work on it. Alternatively we could just wait for other sources to eventually test it. In terms of testing difficult things like improvement rates, I don't think there are any easy solutions.
There's just too many factors. All I can say about AP shells is that I used a slider because I wasn't willing to eat any AP shell losses. Rare equipment that eats itself is usually not a candidate for going beyond +6 either.
I could make another max OTO and resist sliding over the next week but that won't contribute much.
As for thoughts on morale/sparkling, any volunteers to consider another approach? Tank your Akashi down to 0 morale and try a few upgrades. It's pretty much guaranteed to get to +5, and going to +6 fails once in a while, but perhaps a few low morale attempts might provide immediate results as to whether or not morale is a factor at all.
Hmm, I failed a level 1 improvement yesterday while I was +1'ing some torpedoes. I think it was the Fubuki 3 oxygen one; otherwise it was the quad torpedo. I found it pretty strange, as I have never failed a level 1 before. But I assume the chance does exist.
My Akashi is always level 99 and sparkling at 100 morale. Other ships were not sparkled. Fubuki was level 71 or 72, and KTKM at level 150 was used.
I think it might be too soon to assume improvement fatigue exists. It might just be that you are doing so many improvements that the small chance to fail adds up. For example, accuracy maxes out at 97%, and improvement probably has something like a maximum % to succeed (perhaps 99%, for example). Doing 5-10 improvements in a row gives a reasonable chance of a failure, so one of us will come across this situation while doing multiple attempts.
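The arithmetic behind that is easy to check with a couple of lines of code. A minimal sketch, assuming a hypothetical fixed per-attempt success rate (the 99% figure is just the guess above, not a confirmed game value):

```python
# Chance of at least one failure in a run of n attempts, assuming each
# attempt independently succeeds with the same (hypothetical) rate.
def at_least_one_failure(success_rate, n):
    return 1.0 - success_rate ** n

# Even at a 99% success rate, longer runs fail surprisingly often:
for n in (5, 10, 50):
    print(n, round(at_least_one_failure(0.99, n), 4))
```

So with a 99% rate, a run of 10 improvements still has roughly a 9.6% chance of containing at least one failure, which would make the occasional level 1 failure unremarkable.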
For example, I just added expedition logging on my local copy, so I think improvement logging is possible too. But the main question is who would use KCV with those features and how to collect the data.
Hey, I was wondering if you would be interested in helping me out with compiling what we know so far about accuracy and evasion modifiers into a set of convenient tables. It seems we've gotten to the point where enough tests have been done that there's actually a fair bit we know, but the individual results are very scattered about. I've set up a sandbox page (http://kancolle.wikia.com/wiki/Sandbox/Accuracy_Evasion_Tables) with a rough outline of how I envision it being laid out. The exact presentation will probably change along the way, and I'd also like to get sources linked eventually. However, I think it'd be good if we first got all the results crammed in and let the rest sort itself out later, once we have a better idea how it'll look.
Sure man. That's a great idea; I was thinking about it too, since the current combat page says little about it. The template gives very precise information, which I like, but on the other hand most ppl will just treat it as a solid basis without checking the sources themselves.... That would be my concern.
You've got a point there on collecting all the data we have in hand. As you may have noticed, so far I've collected a few. I was hoping that all the scattered confidential accuracy tests could first be translated & compiled in one place, serving as an English database, from which we can finally provide some accurate information for your compilation tables.
Cramming the results in -> Sure, for the overweight penalty part we've got a bunch of tests. For the others you may proceed at your discretion.
Well, that's the nature of disseminating information to a large and varied audience. Few people are going to care about the sources, and most just want something they can take at face value. We just have to try to get it right. I'll start filling in stuff in a bit, but I'm sure there's some stuff that you know better than I do so I'll be counting on your backup.
Homu, do you have an article you can point me to regarding the benefits of luck for BBs? I raised a question on this page (http://kancolle.wikia.com/wiki/EliteBB), but it seems like this topic is not really settled yet?
Does luck actually raise evasion and accuracy?
Is it true that all day time cut ins are unaffected by luck?
Thanks for the msg. All my posts are on my profile page; you may check there.
Concerns about the luck stat recently rose among ppl due to some tentative formula stating "+10 luck = +1 accuracy". Other than that, I'm not sure if luck also boosts evasion, for it's not shown in the evasion term calculation. You may check the article "Accuracy Test Results" (source to be cited).
That's true... Although the destroyer shelling comparison test was the only related one I found.
Nevertheless, I think people would rather spend Maruyu on their Kitakami, for 1 luck = 1% CI chance, instead of 0.1% accuracy.
As for the luck stat & day CI chance, I haven't seen many tests on that. I personally did some experiments on artillery spotting, but none of them was specifically for this factor.
I've designed a test for that: comparing a high-level Nagato & Mutsu (41cm*2, T32 Radar, Seaplane) at 1-1-1, each accompanied by a CVL equipped w/ Saiun *4. The test was halted since Nagato refused to show up for a long time...
Hi! Welcome to the KC wikia. Different ppl have different ways of leveling their ships, but most go to map 2-4 first node, then map 3-2 first node. For leveling DDs and CLs, ppl prefer doing it at the first 3 nodes of map 1-5, but do not enter the boss node, since that will boost your Headquarters (HQ) level. A higher HQ level means tougher enemies in event maps (and every Extra Operation, e.g. 1-5, 2-5, 3-5 & 5-5).
If you mean how to defeat the current event maps with your fleet, it really depends on how high your current HQ level is, and your ship levels too...
Night battle? Entering night battle from day battle won't boost the experience gained. It just increases your ammo consumption but gives another chance to destroy the enemy.
If you need tutorials for leveling, here's a page I'd suggest for you: I know there's a lot of info, but just take the parts most useful for yourself.
Hey, Homuhomu. I decided to delete the old info about abyssal AA cut-in and replace it with the new one, as I feel confident about it.
I remember all my bombers being shot down on first node of E5 (Tsu was the only ship with any AA-related eq there - two high angle guns at that) several times. Other people reported it as well. I could only find this one right now - he suspected CL Oni because we didn't know her eq back then.
When doing 6-1, my Kaga would often fail to hit anyone (or only scratch them), and every time I checked her planes after retreating I would see she had more than half of her bombers shot down, even when the only node I fought was B, the one with a single Nu-class; however, Tsu-class is there as well.
Coincidentally, if Kaga managed to kill one or two enemies in the air phase (she had Tenzan Tomonaga and Ryuusei 601), she would only lose 1-3 planes in each slot.
These facts heavily imply one of the enemies was capable of performing AA cut-in, and again, only Tsu-class has the proper eq. I think Tsu has a built-in Anti-Aircraft Fire Director just like Akizuki, who is also capable of performing AA cut-in with just two high-angle guns.
I came here with one other thing. The section about night special attacks has always bugged me: for the mixed cut-in, "130% x 2 (consecutive)" is used, while "150% x 2 (simultaneous)" stands next to the torpedo cut-in. Why different words?
It seems Tsu-class is highly suspected to have a built-in AAFD. I'm sure ppl are gonna support the revision if they've witnessed the same thing, so please revise at your discretion. Btw, how about her equipment? I mean the 5inch Twin Dual Purpose Cannon. Does it have a built-in AAFD as well, or just the Tsu class?
As for night battle special attacks, the notes "simultaneous" and "consecutive" were preserved from the old edition (back in early 2014). I'm not sure about their implication either, but the calculation is definitely applied separately to each simultaneous / consecutive hit (like what you've tested there).
Thank you for the dedication and for sharing the video (: I'm still looking for more tests with higher sample sizes so we can finally write something credible regarding the skilled lookouts.
I'm not very active on the KCwiki so it took me a while to notice, but you've done quite some research on the game mechanics and collected a lot of statistics. Now, I think you (and many others) have often encountered the problem whereby, say, you switch one EQ and see a 50% cut-in rate, then switch another and see 52%, and wonder if the difference is significant. I'm not extremely good at math, but I worked out a formula that seems to make sense (and works practically). See: Calculating Confidence Intervals for Underlying Probabilities
Given a number of attempts and successes (say you attempted to craft 100 times and succeeded 10 times), it calculates a confidence interval in which the actual probability lies, rather than just saying, "oh, it's 10%, but not very confident". So, for this example, 10/100 successes gives a 90% confidence interval that the actual probability lies between 6.23% and 16.22%. If it were 1000 attempts with 100 successes, the 90% interval becomes 8.56% to 11.69%, which is a narrower range as you can see.
This would help all the research you've done, and allow us to draw better conclusions on what changes the chances significantly and what doesn't. Please have a look through the post that I made and let me know what you think. I have no idea how good your math is, so if you do spot any errors, please let me know. On the other hand, if you need the MatLab code to run such a script to help you calculate, also let me know, or anything else for that matter.
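For readers without MATLAB, here is a rough pure-Python stand-in for the idea (not the original script). It assumes, as the blog post appears to, a uniform prior, so after k successes in n attempts the underlying probability has a Beta(k+1, n-k+1) posterior; for integer counts its CDF reduces to a closed-form binomial sum, and an equal-tailed interval falls out by bisection. All function names here are mine:

```python
from math import comb

def beta_cdf(x, k, n):
    """CDF of the Beta(k+1, n-k+1) posterior for k successes in n
    attempts (uniform prior), via the closed-form binomial sum."""
    m = n + 1
    return sum(comb(m, j) * x ** j * (1 - x) ** (m - j)
               for j in range(k + 1, m + 1))

def quantile(p, k, n, tol=1e-10):
    """Invert the posterior CDF by bisection (it is monotone in x)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if beta_cdf(mid, k, n) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def credible_interval(k, n, conf=0.90):
    tail = (1 - conf) / 2
    return quantile(tail, k, n), quantile(1 - tail, k, n)

print(credible_interval(10, 100))    # ~6% .. ~16%, as in the example
print(credible_interval(100, 1000))  # noticeably narrower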
It's really my pleasure to see another guy who's strong in computation... I've met another guy before, @Mathiaszealot, who introduced me to the formula:
Trials = ceil( p * (1 - p) / (E / Z)^2 )
where E = the error margin, and Z = 1.6 (for 90% confidence)
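In code form, that sample-size formula might look like the sketch below (the function name is mine; note the usual textbook Z for 90% confidence is 1.645 rather than 1.6, so the constant above is a rounded value):

```python
from math import ceil

def trials_needed(p, E, Z=1.6):
    """How many trials are needed so a proportion near p is pinned
    down to within +/-E at the confidence level implied by Z."""
    return ceil(p * (1 - p) / (E / Z) ** 2)

# Worst case is p = 0.5; measuring within +/-5% at ~90% confidence:
print(trials_needed(0.5, 0.05))          # 256 trials with Z = 1.6
print(trials_needed(0.5, 0.05, 1.645))   # 271 with the textbook Z
```

Since p(1-p) peaks at p = 0.5, skewed rates need fewer trials by this formula (trials_needed(0.9, 0.05) gives 93), but as noted below, the symmetric bound itself becomes a poor description near 0% or 100%.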
I'm not majoring in math (engineering actually), so I just used the formula to calculate errors for my exp'ts. The problem is that as the actual p gets closer to 100%, the equation stops being useful. I was trying to use the beta function but I'm just too slow to learn it by myself... My laptop is about to die so I'll talk to you later. Thank you so much for visiting & sharing this!
Thanks very much for having a look! I'm taking Engineering as well (that's why I said my math is not very good =p ). Could you elaborate on the formula you posted and how you'd use it? I'd like to see if it agrees with mine. What are p, E, and Z? Maybe a simple 10/100 successes/attempts example?
Thanks for the explanation and the list of values! I'll be able to work with this for some time.
I went to the wiki article which Mathiaszealot linked in the comments to check it out. It'll probably take me some time to digest it fully, but I did a quick check. Using the worked example on the wiki (have a quick look at the Posterior Probability Density Function example) for 7 heads (successes) and 3 tails (so, 10 attempts), I ran that in my Matlab script with the values 7/10, and it output the lower bound as 43.56% and the upper as 86.49% for a 90% CI. Then, since the PDF is provided with the graph in the example, I ran an integral using those very same bounds. Result? 90%, exactly what is needed! So yes, the formula stands up to verification!
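That verification can be reproduced without MATLAB. With 7 heads and 3 tails under a uniform prior, the posterior is Beta(8, 4), and for integer parameters its CDF is a plain binomial sum, so no numerical integration is needed. A sketch (the 43.56%/86.49% bounds are the ones quoted above; the function name is mine):

```python
from math import comb

def beta_posterior_cdf(x, successes, failures):
    """CDF of the Beta(successes+1, failures+1) posterior (uniform
    prior), expressed as a closed-form binomial sum."""
    n = successes + failures + 1
    return sum(comb(n, j) * x ** j * (1 - x) ** (n - j)
               for j in range(successes + 1, n + 1))

lo, hi = 0.4356, 0.8649  # bounds reported by the MATLAB script
mass = beta_posterior_cdf(hi, 7, 3) - beta_posterior_cdf(lo, 7, 3)
print(f"probability mass between the bounds: {mass:.4f}")  # expect ~0.90
```

The printed mass lands within a whisker of 0.90, matching the integral check described above.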
I'll go work with the one that you just gave me, since it sounds interesting and looks like it can be used to verify other statistical sources, say, if the probabilities reported by someone look suspicious. I'll see what I can do with it!
P.S. I forgot to mention that I really appreciate all the hard work in collecting statistics that you've been doing! And it's great to talk to another engineer who takes his free time to analyse games in such depth! People would think we're crazy =P
Update: I've placed the Matlab script along with the blogpost as well, so feel free to use it if you have Matlab!
I've taken a look at the idea behind the standard error calculation, and I understand it better now. It's, as you mentioned, great for any probabilities around 50%, which should work pretty well for most combat calculations, I think. The error bounds are symmetrical, and they're kind of fixed, because they assume that the probability is 50%, which is when the largest error interval results. Another way of putting it is that it errs on the side of caution. So if you are at the tail ends, with high or low probabilities, the estimated error is going to be significantly larger than what it should be (causing you to work harder to achieve a narrower bound). That isn't necessarily a bad thing, but since the work is already tedious as it is, it's best to know how few samples are needed to ensure enough confidence.
As Mathiaszealot also mentioned, "the proper method is to use the integral of a beta function", which I believe is essentially what I've managed to work out (it uses the incomplete beta function, which is a more general form of the beta function). I also realised that there's one more example in the "Checking Whether a Coin is Fair" Wikipedia article which I tested against. What the example does is use the standard error calculation to generate the bounds of the confidence interval for 12000 coin tosses with 5961 heads, assuming the underlying probability is 5961/12000 = 49.68% (which is close to 50%). To verify, I plugged the same figures into the MATLAB script, and it generated the same intervals. So this basically shows that the standard error calculation is valid for probabilities around 50%, which is what we already knew and realised (just that now, it's been verified).
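As a rough illustration of that agreement (a Python stand-in for the MATLAB comparison, with Z = 1.645 for 90% as my assumed constant), the sketch below computes both intervals for the 5961/12000 coin example. The beta CDF is evaluated through the equivalent binomial tail, summed in log space so it stays accurate at n = 12000:

```python
from math import exp, lgamma, log, sqrt

def binom_tail(k, m, x):
    """P(X >= k) for X ~ Binomial(m, x), summed in log space to avoid
    underflow with thousands of trials."""
    if x <= 0.0:
        return 1.0 if k <= 0 else 0.0
    if x >= 1.0:
        return 1.0
    c, lx, l1x = lgamma(m + 1), log(x), log(1 - x)
    logs = [c - lgamma(j + 1) - lgamma(m - j + 1) + j * lx + (m - j) * l1x
            for j in range(k, m + 1)]
    peak = max(logs)
    return exp(peak) * sum(exp(v - peak) for v in logs)

def beta_cdf(x, k, n):
    """CDF of the Beta(k+1, n-k+1) posterior: P(Binomial(n+1, x) >= k+1)."""
    return binom_tail(k + 1, n + 1, x)

def quantile(p, k, n, tol=1e-8):
    """Invert the posterior CDF by bisection."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if beta_cdf(mid, k, n) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

k, n = 5961, 12000
b_lo, b_hi = quantile(0.05, k, n), quantile(0.95, k, n)  # beta, 90%

p_hat = k / n
se = sqrt(p_hat * (1 - p_hat) / n)                       # standard error
w_lo, w_hi = p_hat - 1.645 * se, p_hat + 1.645 * se      # normal approx

print(f"beta  : {b_lo:.5f} .. {b_hi:.5f}")
print(f"normal: {w_lo:.5f} .. {w_hi:.5f}")  # nearly identical near 50%
```

With a sample this large and a rate this close to 50%, the two methods agree to several decimal places, which is exactly the point being made above; the gap only opens up for small samples or extreme probabilities.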
So I suppose, if you need something quick, you could use the table you generated, since it's much easier to refer to. However, if you want a more thorough calculation, or if you're at the computer and have access to the MATLAB script, it would be better to plug the values in and let the computer churn out the exact confidence bounds.