Clover Problems and Solutions


ErmaC
3,206 posts in this topic


yes, exactly. i turned on indexing for my ESP to test, and i do see the resource-busy error.

 

anyway, what happens if you use CloverDaemon vs. the LogoutHook is that diskarbitrationd will unmount any user-mounted filesystems.

so i put in a short test to wait for this activity to complete.

 

here is v1.11

 

 

nvram_v1.11.zip


Here's the method Sherlock and I discussed for mounting the ESP:

  • see if the ESP is already mounted; if yes, just find the mount point and dump to it directly
  • if the ESP is not mounted, mount it
  • if the mount fails, see if it is a resource-busy problem; if yes,
  • then umount -f the ESP and mount it again
  • if it is not a resource-busy problem but a disk-damage problem, try fsck to fix it
  • if the fix succeeds, dump NVRAM to the EFI partition; otherwise dump to root (/)
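The bullet steps above can be sketched in shell. This is a minimal sketch only: the helper names (_findMountPoint, _saveNvram) are hypothetical, not the real script's functions, and it assumes macOS's mount/umount/fsck_msdos/nvram tools.

```shell
# Hypothetical sketch of the mount-and-dump flow above.
# Helper names are illustrative; mount points with spaces are not handled.

_findMountPoint()
{
    # $1 = device node; reads `mount` output on stdin and prints the mount point
    awk -v dev="$1" '$1 == dev { print $3 }'
}

_saveNvram()
{
    local dev="$1" plist="nvram.plist" mp
    mp=$(mount | _findMountPoint "${dev}")
    if [ -z "${mp}" ]; then
        mp="/Volumes/ESP"
        mkdir -p "${mp}"
        if ! mount -t msdos "${dev}" "${mp}"; then
            # resource busy: force-unmount and try the mount again
            umount -f "${dev}" 2>/dev/null
            if ! mount -t msdos "${dev}" "${mp}"; then
                # not busy but damaged: let fsck try to repair, then re-mount
                fsck_msdos -y "${dev}"
                mount -t msdos "${dev}" "${mp}"
            fi
        fi
    fi
    if mount | grep -q " on ${mp} "; then
        nvram -x -p > "${mp}/${plist}"    # dump NVRAM to the EFI partition
    else
        nvram -x -p > "/${plist}"         # otherwise fall back to root (/)
    fi
}
```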

 

syscl


the resource-busy error only comes from attempting to unmount a busy filesystem, so it's ok to have that. i am trying to avoid a forced unmount, which can lead to corruption.

 

so now, if the filesystem is mounted, it will use it; otherwise we proceed to mount it. i have never had a failure to mount.

 

otherwise i follow that logic - use any already-mounted ESP to avoid an expensive unmount and re-mount.

 

the problem was that diskarbitrationd was in the process of unmounting filesystems, and thus the script was in competition with it!

 

now it waits until the logout disk-unmount process completes; then we proceed.

 

Note: the only thing i didn't check for was failure to mount the ESP due to corruption!

if this script is working, we can add this feature.


Using a mount point that is already mounted is a good method and will significantly speed up the dump (on an HDD). But the problem, as Sherlock described, is that he still encountered the NVRAM-dumped-to-root case using your method. That's why we need to unmount just in case. If you use diskutil umount, it will also force-unmount a mount point when the resource is busy.

 

Edit: I am now looking into v1.11...

 

syscl


sherlock's problem, as i understand it, is:

1) he had a mounted ESP

2) he uses indexing - which causes the unmount to hit resource busy - this happens after we write nvram.plist

3) and it was failing to find the ESP because disk arbitration was unmounting it after logout.

 

i recreated the issue on my system.

 

try v1.11 to see if it solves this - it did on my box


if you have not turned off indexing for the ESP - meaning there is a .Spotlight-V100 directory - then after writing nvram.plist, mdutil will lock it up for a short time.

Good idea to 'touch /Volumes/ESP/.metadata_never_index'

That will disable Spotlight indexing for the ESP - no need to adjust it with mdutil.
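A minimal sketch of that tip (the function name is illustrative, and deleting an existing .Spotlight-V100 index directory is an optional extra, not part of the tip itself):

```shell
# Illustrative helper: disable Spotlight indexing on a mount point by creating
# the .metadata_never_index marker; also clear out any existing index data.
_neverIndex()
{
    local mp="$1"
    [ -f "${mp}/.metadata_never_index" ] || touch "${mp}/.metadata_never_index"
    rm -rf "${mp}/.Spotlight-V100"    # old index data; safe to delete
}
```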

  • Like 4

@Sherlock and @tluck 

 

Please try v1.13. I bumped the version because I added a lot of fixes to the code.

 

80.save_nvram_plist.local.zip

 

Another bug we need to fix:

The log information is incorrect because the script is called twice: once for logout and once by CloverDaemon. Both tluck's and my script have this issue. I have two methods to fix this bug:

  • Disable CloverDaemon's call to 80.save_nvram_plist.local
  • The hard one I am trying: trap the shutdown signal; if the script is in the logout/normal stage, perform the normal behavior; otherwise (in the shutdown stage) do nothing
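A third option worth considering (my sketch, not from syscl's script; the lock-file path and function names are hypothetical) is a run-once guard, so whichever of the two invocations comes second simply exits early:

```shell
# Hypothetical run-once guard: the first invocation creates a lock file, and a
# later invocation (logout hook vs. CloverDaemon) detects it and bails out.
gLockFile="/tmp/.save_nvram_plist.lock"

_alreadyRan()
{
    [ -f "${gLockFile}" ]    # true (exit 0) if a previous run left the lock behind
}

_markRan()
{
    touch "${gLockFile}"
}

# usage at the top of the script:
#   _alreadyRan && exit 0
#   _markRan
```

The lock file would need to be cleaned up at boot (e.g. by putting it under /tmp, which is cleared on restart).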

 

Any suggestions will be appreciated.

 

Thank you,

syscl

  • Like 1

@syscl,

 

fun to collaborate!

 

there are nice improvements in error checking - but in one key area we are back to the same discussion as before:

you are solving the busy problem in a different way.

 

1) problem with busy:

if using CloverDaemon, any user-mounted filesystem gets unmounted by diskarbitrationd upon logout.

so if there are ESPs and other filesystems mounted, they will become unmounted after logout - exactly at the time the rc.shutdown script is running.

 

i solved this conflict by putting in a check/wait loop for the /sbin/umount initiated by diskarbitrationd to complete

you solve it by doing an unmount/remount - which in effect does the same thing: it puts in a delay

 

However, I still ask: why use umount -f and risk corruption? this is a bad idea.

if the ESP is busy during an unmount - just wait
if the ESP is busy due to indexing etc. - don't unmount - just use it.

 

Note: all that said, this is about using CloverDaemon; it is not an issue with the LogoutHook method.

 

2) issues:

 

 

Another bug we need to fix:

The log information is incorrect because the script is called twice: once for logout and once by CloverDaemon. Both tluck's and my script have this issue. I have two methods to fix this bug:

 

  • Disable CloverDaemon's call to 80.save_nvram_plist.local
  • The hard one I am trying: trap the shutdown signal; if the script is in the logout/normal stage, perform the normal behavior; otherwise (in the shutdown stage) do nothing

checking for running a 2nd time - ok, fine.

but perhaps it is better to pick one method; that should fix your second item above:

 

so do 1 of these:

1) use CloverDaemon to run during shutdown
or
2) use the LogoutHook method during logout (before shutdown).

 - when I use the LogoutHook, i split CloverDaemon into 2 scripts - 1 for the start service and 1 for the stop service (LogoutHook)

 

note: I also turn off indexing for the ESP.

 

3) suggestion on output - explicitly redirecting output to the rc.shutdown.log file while running from within CloverDaemon (which is also writing to rc.shutdown.log) can cause the log to be out of order.

 

4) still not sure why we need to check for a change or not.

cost option 1): check for a change - read the existing nvram.plist - if changed, write the nvram.plist

cost option 2): always write the nvram file.

 

cost option 1 = 2 times the IO of cost option 2

 

yes, i realize nvram values don't always change - but i like all nvram.plist files to not only have the same values but also the same date stamp.
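For reference, cost option 1 could look like this (a hedged sketch; _dumpIfChanged is a hypothetical helper, and the dump command is passed in as a parameter so the compare logic is visible on its own):

```shell
# Hypothetical sketch of cost option 1: dump to a temp file, compare against the
# existing nvram.plist, and only replace it when the contents actually changed.
# $1 = command that prints the NVRAM dump (e.g. "nvram -x -p"), $2 = target path.
_dumpIfChanged()
{
    local tmp
    tmp=$(mktemp)
    $1 > "${tmp}"
    if cmp -s "${tmp}" "$2"; then
        rm -f "${tmp}"            # unchanged: skip the write (old date stamp survives)
    else
        mv "${tmp}" "$2"          # changed: the extra read/compare paid for itself
    fi
}
```

The `cmp` read is exactly the doubled IO tluck describes; always writing costs one write but keeps the date stamps identical across all copies.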

  • Like 1

I wish to remind everyone that this topic is not for discussion.

This thread is for code propositions.

@Sherlock @tluck and EmuVariable users

Update 80.save_nvram_plist.local to v1.15:

  • v1.14: tluck's resource-busy wait function; the unmount -f case should be almost gone:
function _spinWait()
{
    #
    # tluck: first introduced in v1.14
    # wait for any background daemons (diskarbitrationd will launch /sbin/umount
    # on filesystems) to complete after logout
    # use a debug loop to watch the processes after logout
    #
    local gCommand=$1
    ps -ef |grep -v -i grep |grep -i "${gCommand}"      >/dev/null
    if [[ $? == 0 ]]; then
        gTurn=0
        while [ ${gTurn} -eq 0 ];
        do
          #
          # do spin wait
          #
          sleep 1
          ps -ef|grep -v -i grep |grep -i "${gCommand}" >/dev/null
          gTurn=$?
        done
    fi

    if [[ ${DEBUG} != 0 ]]; then
        echo mount
        mount
        echo
    fi
}
  • v1.15: check for and disable ESP indexing; this significantly speeds up shutdown and reboot, and there's no more resource busy:
function _disableESPIndex()
{
    local MountPoint=$1
    local gIndexConfigf="${MountPoint}/.metadata_never_index"
    if [ ! -f ${gIndexConfigf} ]; then
        #
        # Spotlight indexing is enabled; now we turn it off
        #
        echo "${gDmpTimeStamp}  Disable indexing on ${MountPoint}"
        touch ${gIndexConfigf}
    fi
}

Note: I'd recommend all users (Macintosh and Hackintosh) disable Spotlight indexing on the ESP, which will significantly speed up shutdown/reboot once the ESP gets mounted.

 

Here's the code

80.save_nvram_plist.local 5.zip

 

v1.16 will try to fix the inaccurate-log issue (the script being called twice)

 

Thank you,

syscl

  • Like 4

v1.14 and v1.15 work fine for me. (i already use .metadata_never_index)

 

also, maybe we could also remove .Spotlight-V100?

 

v1.16 - how does the 80.nvram script get called twice? do you mean from the LogoutHook and CloverDaemon?

 

nice job on the scripts, btw.

  • Like 1

 

EFI can't be added to the exclude list

Screen Shot 2017-02-10 at 9.58.29.png



today it made the nvram file in root again.

post-980913-0-62147800-1486709702_thumb.png

 

Supreme-MBP:~ supreme$ mdutil -s /Volumes/EFI/

/Volumes/EFI:

Indexing and searching disabled.

Supreme-MBP:~ supreme$ 

 

disabling indexing and the spin wait can't help this rare case. i tested the v1.15 script for 3 days.

 

i will discuss it with syscl

 

thank you.

  • Like 1

EFI can't be added to exclude list

Screen Shot 2017-02-10 at 9.58.29.png

How about this one?

touch /Volumes/EFI/.metadata_never_index

Then remount to see whether indexing is disabled or not.

 

Here's my output

syscls-MacBook:~ syscl$ mdutil -s /Volumes/EFI/
/Volumes/EFI:
	Indexing and searching disabled.
syscls-MacBook:~ syscl$ 

@Slice am I missing something?

 

syscl

  • Like 1


Looks like an issue with my current 10.7.5

$ ls /Volumes/
\Data		ESP		Macintosh	Windows
EFI		MacHD		QEFI
...
$ diskutil umount /dev/disk0s1
disk0s1 was already unmounted
...
$ mdutil -s /Volumes/EFI/
/Volumes/EFI:
	No index.


  • Like 1

AMD GREENLAND (Polaris 12?) (?)

--> linux kernel 

 

ati.c

  /* Polaris12 */
  {0x6980, 0x00000000, CHIP_FAMILY_GREENLAND, "AMD Radeon Polaris 12",        kNull },
  {0x6981, 0x00000000, CHIP_FAMILY_GREENLAND, "AMD Radeon Polaris 12",        kNull },
  {0x6985, 0x00000000, CHIP_FAMILY_GREENLAND, "AMD Radeon Polaris 12",        kNull },
  {0x6986, 0x00000000, CHIP_FAMILY_GREENLAND, "AMD Radeon Polaris 12",        kNull },
  {0x6987, 0x00000000, CHIP_FAMILY_GREENLAND, "AMD Radeon Polaris 12",        kNull },
  {0x699F, 0x00000000, CHIP_FAMILY_GREENLAND, "AMD Radeon Polaris 12",        kNull },
...
  "Tobago",
  "Ellesmere",
  "Baffin",
  "Greenland",
  ""
};
...
ati.h

...
 CHIP_FAMILY_TOBAGO,
 CHIP_FAMILY_ELLESMERE, /* Polaris 10 */
 CHIP_FAMILY_BAFFIN,   /* Polaris 11 */
 CHIP_FAMILY_GREENLAND, /* Polaris 12 */
 CHIP_FAMILY_LAST
} ati_chip_family_t;
...
ati.c/h -> polaris12_IDs.zip

 

ErmaC

  • Like 5

No, that sounds like another problem.

@Zenith432

Glad to see you again!

Did you see a "boot0af error" message with rev 4003 (and earlier)?

I use boot0 from Chameleon in the MBR and boot1xalt on the exFAT partition.

I don't remember the difference between Chameleon's boot0 and Clover's boot0af.

When I try the boot6 and boot7 binaries from r4003 in the Clover SourceForge files area, they spontaneously reboot. A spontaneous reboot is caused by a CPU exception when an interrupt vector table is not properly installed (= double fault).

When I build boot6 and boot7 myself from r4003 (using a customized list of DXEs), the default low-ebda build works OK. But when I build with --genpage, it hangs after printing a T or X character. I investigated this hang; it is caused by the corruption I fixed in r4004. Now in r4004, my private builds of boot6 and boot7 work with both low-ebda and --genpage.


I think you are right about 4004.

I spoke about another problem you probably know: boot0af changed when you implemented exFAT booting.

See boot0af from the working 2652: boot0af.s.zip

I don't know if it is related, but I think you can find the problem.

 

PS. Must the DI register be initialized?


The problematic USB stick has the manufacturer's factory format:

Command (? for help): p
Disk /dev/disk1: 15695872 sectors, 7.5 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 33F3B597-49D0-47D6-A447-819364F3731A
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 15695838
Partitions will be aligned on 2048-sector boundaries
Total free space is 8158 sectors (4.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            8192        15695871   7.5 GiB     0700  Linux/Windows data

 #: id  cyl  hd sec -  cyl  hd sec [     start -       size]
------------------------------------------------------------------------
*1: 0B    0 130   3 -  977   5  52 [      8192 -   15687680] Win95 FAT-32
 2: 00    0   0   0 -    0   0   0 [         0 -          0] unused      
 3: 00    0   0   0 -    0   0   0 [         0 -          0] unused      
 4: 00    0   0   0 -    0   0   0 [         0 -          0] unused      

